00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 206 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3708 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.155 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.156 The recommended git tool is: git 00:00:00.156 using credential 00000000-0000-0000-0000-000000000002 00:00:00.158 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.194 Fetching changes from the remote Git repository 00:00:00.197 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.224 Using shallow fetch with depth 1 00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.224 > git --version # timeout=10 00:00:00.247 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.419 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.429 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.440 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.440 > git config core.sparsecheckout # timeout=10 00:00:08.451 > git read-tree -mu HEAD # timeout=10 00:00:08.467 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.490 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.490 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.618 [Pipeline] Start of Pipeline 00:00:08.631 [Pipeline] library 00:00:08.632 Loading library shm_lib@master 00:00:08.633 Library shm_lib@master is cached. Copying from home. 00:00:08.647 [Pipeline] node 00:00:08.658 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.659 [Pipeline] { 00:00:08.669 [Pipeline] catchError 00:00:08.671 [Pipeline] { 00:00:08.681 [Pipeline] wrap 00:00:08.688 [Pipeline] { 00:00:08.697 [Pipeline] stage 00:00:08.699 [Pipeline] { (Prologue) 00:00:08.715 [Pipeline] echo 00:00:08.717 Node: VM-host-SM9 00:00:08.723 [Pipeline] cleanWs 00:00:08.732 [WS-CLEANUP] Deleting project workspace... 00:00:08.732 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.739 [WS-CLEANUP] done 00:00:08.926 [Pipeline] setCustomBuildProperty 00:00:09.069 [Pipeline] httpRequest 00:00:09.689 [Pipeline] echo 00:00:09.691 Sorcerer 10.211.164.20 is alive 00:00:09.701 [Pipeline] retry 00:00:09.703 [Pipeline] { 00:00:09.715 [Pipeline] httpRequest 00:00:09.719 HttpMethod: GET 00:00:09.719 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.720 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.721 Response Code: HTTP/1.1 200 OK 00:00:09.722 Success: Status code 200 is in the accepted range: 200,404 00:00:09.722 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.191 [Pipeline] } 00:00:11.208 [Pipeline] // retry 00:00:11.215 [Pipeline] sh 00:00:11.494 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.508 [Pipeline] httpRequest 00:00:12.298 [Pipeline] echo 00:00:12.300 Sorcerer 10.211.164.20 is alive 00:00:12.306 [Pipeline] retry 00:00:12.307 [Pipeline] { 00:00:12.315 [Pipeline] httpRequest 00:00:12.318 HttpMethod: GET 00:00:12.319 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:12.319 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:12.335 Response Code: HTTP/1.1 200 OK 00:00:12.335 Success: Status code 200 is in the accepted range: 200,404 00:00:12.336 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:54.828 [Pipeline] } 00:00:54.851 [Pipeline] // retry 00:00:54.860 [Pipeline] sh 00:00:55.146 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:57.693 [Pipeline] sh 00:00:57.974 + git -C spdk log --oneline -n5 00:00:57.974 b18e1bd62 version: v24.09.1-pre 00:00:57.974 19524ad45 version: v24.09 00:00:57.974 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:57.974 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:57.974 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:57.995 [Pipeline] withCredentials 00:00:58.006 > git --version # timeout=10 00:00:58.021 > git --version # 'git version 2.39.2' 00:00:58.037 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:58.040 [Pipeline] { 00:00:58.050 [Pipeline] retry 00:00:58.053 [Pipeline] { 00:00:58.069 [Pipeline] sh 00:00:58.350 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:58.362 [Pipeline] } 00:00:58.380 [Pipeline] // retry 00:00:58.385 [Pipeline] } 00:00:58.403 [Pipeline] // withCredentials 00:00:58.414 [Pipeline] httpRequest 00:00:58.792 [Pipeline] echo 00:00:58.794 Sorcerer 10.211.164.20 is alive 00:00:58.804 [Pipeline] retry 00:00:58.806 [Pipeline] { 00:00:58.821 [Pipeline] httpRequest 00:00:58.826 HttpMethod: GET 00:00:58.826 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:58.827 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:58.828 Response Code: HTTP/1.1 200 OK 00:00:58.829 Success: Status code 200 is in the accepted range: 200,404 00:00:58.829 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.645 [Pipeline] } 00:01:07.661 [Pipeline] // retry 00:01:07.669 [Pipeline] sh 00:01:07.947 + tar --no-same-owner -xf 
dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:09.338 [Pipeline] sh 00:01:09.621 + git -C dpdk log --oneline -n5 00:01:09.622 caf0f5d395 version: 22.11.4 00:01:09.622 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:09.622 dc9c799c7d vhost: fix missing spinlock unlock 00:01:09.622 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:09.622 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:09.641 [Pipeline] writeFile 00:01:09.657 [Pipeline] sh 00:01:09.940 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:09.952 [Pipeline] sh 00:01:10.235 + cat autorun-spdk.conf 00:01:10.235 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.235 SPDK_TEST_NVME=1 00:01:10.235 SPDK_TEST_FTL=1 00:01:10.235 SPDK_TEST_ISAL=1 00:01:10.235 SPDK_RUN_ASAN=1 00:01:10.235 SPDK_RUN_UBSAN=1 00:01:10.235 SPDK_TEST_XNVME=1 00:01:10.235 SPDK_TEST_NVME_FDP=1 00:01:10.235 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:10.235 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:10.235 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.242 RUN_NIGHTLY=1 00:01:10.245 [Pipeline] } 00:01:10.263 [Pipeline] // stage 00:01:10.279 [Pipeline] stage 00:01:10.282 [Pipeline] { (Run VM) 00:01:10.298 [Pipeline] sh 00:01:10.606 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.606 + echo 'Start stage prepare_nvme.sh' 00:01:10.606 Start stage prepare_nvme.sh 00:01:10.606 + [[ -n 2 ]] 00:01:10.606 + disk_prefix=ex2 00:01:10.606 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:10.606 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:10.606 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:10.606 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.606 ++ SPDK_TEST_NVME=1 00:01:10.606 ++ SPDK_TEST_FTL=1 00:01:10.606 ++ SPDK_TEST_ISAL=1 00:01:10.606 ++ SPDK_RUN_ASAN=1 00:01:10.606 ++ SPDK_RUN_UBSAN=1 00:01:10.606 ++ SPDK_TEST_XNVME=1 00:01:10.606 ++ SPDK_TEST_NVME_FDP=1 00:01:10.606 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:10.606 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:10.606 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.606 ++ RUN_NIGHTLY=1 00:01:10.606 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:10.606 + nvme_files=() 00:01:10.606 + declare -A nvme_files 00:01:10.606 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.606 + nvme_files['nvme.img']=5G 00:01:10.606 + nvme_files['nvme-cmb.img']=5G 00:01:10.606 + nvme_files['nvme-multi0.img']=4G 00:01:10.606 + nvme_files['nvme-multi1.img']=4G 00:01:10.606 + nvme_files['nvme-multi2.img']=4G 00:01:10.606 + nvme_files['nvme-openstack.img']=8G 00:01:10.606 + nvme_files['nvme-zns.img']=5G 00:01:10.606 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.606 + (( SPDK_TEST_FTL == 1 )) 00:01:10.606 + nvme_files["nvme-ftl.img"]=6G 00:01:10.606 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.606 + nvme_files["nvme-fdp.img"]=1G 00:01:10.606 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:10.606 + for nvme in "${!nvme_files[@]}" 00:01:10.606 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:10.606 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.606 + for nvme in "${!nvme_files[@]}" 00:01:10.606 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:10.607 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:10.607 + for nvme in "${!nvme_files[@]}" 00:01:10.607 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:10.607 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.607 + for nvme in "${!nvme_files[@]}" 00:01:10.607 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:10.607 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:10.607 + for nvme in "${!nvme_files[@]}" 00:01:10.607 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:10.607 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.607 + for nvme in "${!nvme_files[@]}" 00:01:10.607 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:10.866 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.866 + for nvme in "${!nvme_files[@]}" 00:01:10.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:10.866 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.866 + for nvme in "${!nvme_files[@]}" 00:01:10.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:10.866 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:10.866 + for nvme in "${!nvme_files[@]}" 00:01:10.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:10.866 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.866 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:11.125 + echo 'End stage prepare_nvme.sh' 00:01:11.125 End stage prepare_nvme.sh 00:01:11.138 [Pipeline] sh 00:01:11.421 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:11.421 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:11.681 00:01:11.681 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:11.681 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:11.681 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:11.681 HELP=0 00:01:11.681 DRY_RUN=0 00:01:11.681 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:11.681 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:11.681 NVME_AUTO_CREATE=0 00:01:11.681 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:11.681 NVME_CMB=,,,, 00:01:11.681 NVME_PMR=,,,, 00:01:11.681 NVME_ZNS=,,,, 00:01:11.681 NVME_MS=true,,,, 00:01:11.681 NVME_FDP=,,,on, 00:01:11.681 SPDK_VAGRANT_DISTRO=fedora39 00:01:11.681 SPDK_VAGRANT_VMCPU=10 00:01:11.681 SPDK_VAGRANT_VMRAM=12288 00:01:11.681 SPDK_VAGRANT_PROVIDER=libvirt 00:01:11.681 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:11.681 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:11.681 SPDK_OPENSTACK_NETWORK=0 00:01:11.681 VAGRANT_PACKAGE_BOX=0 00:01:11.681 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:11.681 FORCE_DISTRO=true 00:01:11.681 VAGRANT_BOX_VERSION= 00:01:11.681 EXTRA_VAGRANTFILES= 00:01:11.681 NIC_MODEL=e1000 00:01:11.681 00:01:11.681 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:11.681 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:14.985 Bringing machine 'default' up with 'libvirt' provider... 00:01:15.245 ==> default: Creating image (snapshot of base box volume). 00:01:15.245 ==> default: Creating domain with the following settings... 
00:01:15.245 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733636918_b3293c1bdc6bb4269349 00:01:15.245 ==> default: -- Domain type: kvm 00:01:15.245 ==> default: -- Cpus: 10 00:01:15.245 ==> default: -- Feature: acpi 00:01:15.245 ==> default: -- Feature: apic 00:01:15.245 ==> default: -- Feature: pae 00:01:15.245 ==> default: -- Memory: 12288M 00:01:15.245 ==> default: -- Memory Backing: hugepages: 00:01:15.245 ==> default: -- Management MAC: 00:01:15.245 ==> default: -- Loader: 00:01:15.245 ==> default: -- Nvram: 00:01:15.245 ==> default: -- Base box: spdk/fedora39 00:01:15.245 ==> default: -- Storage pool: default 00:01:15.245 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733636918_b3293c1bdc6bb4269349.img (20G) 00:01:15.245 ==> default: -- Volume Cache: default 00:01:15.245 ==> default: -- Kernel: 00:01:15.245 ==> default: -- Initrd: 00:01:15.245 ==> default: -- Graphics Type: vnc 00:01:15.245 ==> default: -- Graphics Port: -1 00:01:15.245 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.245 ==> default: -- Graphics Password: Not defined 00:01:15.245 ==> default: -- Video Type: cirrus 00:01:15.245 ==> default: -- Video VRAM: 9216 00:01:15.245 ==> default: -- Sound Type: 00:01:15.245 ==> default: -- Keymap: en-us 00:01:15.245 ==> default: -- TPM Path: 00:01:15.245 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.245 ==> default: -- Command line args: 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:15.245 ==> default: -> value=-drive, 00:01:15.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:15.245 ==> default: -> value=-drive, 00:01:15.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:15.245 ==> default: -> value=-drive, 00:01:15.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.245 ==> default: -> value=-drive, 00:01:15.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.245 ==> default: -> value=-drive, 00:01:15.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:15.245 ==> default: -> value=-drive, 00:01:15.245 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:15.245 ==> default: -> value=-device, 00:01:15.245 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.505 ==> default: Creating shared folders metadata... 00:01:15.505 ==> default: Starting domain. 00:01:16.888 ==> default: Waiting for domain to get an IP address... 00:01:31.778 ==> default: Waiting for SSH to become available... 00:01:33.156 ==> default: Configuring and enabling network interfaces... 00:01:37.348 default: SSH address: 192.168.121.27:22 00:01:37.349 default: SSH username: vagrant 00:01:37.349 default: SSH auth method: private key 00:01:39.885 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:48.006 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:52.207 ==> default: Mounting SSHFS shared folder... 00:01:54.114 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:54.114 ==> default: Checking Mount.. 00:01:55.506 ==> default: Folder Successfully Mounted! 00:01:55.506 ==> default: Running provisioner: file... 00:01:56.451 default: ~/.gitconfig => .gitconfig 00:01:56.730 00:01:56.730 SUCCESS! 00:01:56.730 00:01:56.730 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:56.730 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:56.730 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:56.730 00:01:56.739 [Pipeline] } 00:01:56.754 [Pipeline] // stage 00:01:56.762 [Pipeline] dir 00:01:56.763 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:56.765 [Pipeline] { 00:01:56.777 [Pipeline] catchError 00:01:56.779 [Pipeline] { 00:01:56.791 [Pipeline] sh 00:01:57.070 + vagrant ssh-config --host vagrant 00:01:57.070 + sed -ne /^Host/,$p 00:01:57.070 + tee ssh_conf 00:02:00.360 Host vagrant 00:02:00.360 HostName 192.168.121.27 00:02:00.360 User vagrant 00:02:00.360 Port 22 00:02:00.360 UserKnownHostsFile /dev/null 00:02:00.360 StrictHostKeyChecking no 00:02:00.360 PasswordAuthentication no 00:02:00.360 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:00.360 IdentitiesOnly yes 00:02:00.360 LogLevel FATAL 00:02:00.360 ForwardAgent yes 00:02:00.360 ForwardX11 yes 00:02:00.360 00:02:00.373 [Pipeline] withEnv 00:02:00.375 [Pipeline] { 00:02:00.388 [Pipeline] sh 00:02:00.667 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:00.667 source /etc/os-release 00:02:00.667 [[ -e /image.version ]] && img=$(< /image.version) 00:02:00.667 # Minimal, systemd-like check. 
00:02:00.667 if [[ -e /.dockerenv ]]; then 00:02:00.667 # Clear garbage from the node's name: 00:02:00.667 # agt-er_autotest_547-896 -> autotest_547-896 00:02:00.667 # $HOSTNAME is the actual container id 00:02:00.667 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:00.667 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:00.667 # We can assume this is a mount from a host where container is running, 00:02:00.667 # so fetch its hostname to easily identify the target swarm worker. 00:02:00.667 container="$(< /etc/hostname) ($agent)" 00:02:00.667 else 00:02:00.667 # Fallback 00:02:00.667 container=$agent 00:02:00.667 fi 00:02:00.667 fi 00:02:00.667 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:00.667 00:02:00.936 [Pipeline] } 00:02:00.952 [Pipeline] // withEnv 00:02:00.960 [Pipeline] setCustomBuildProperty 00:02:00.974 [Pipeline] stage 00:02:00.976 [Pipeline] { (Tests) 00:02:00.992 [Pipeline] sh 00:02:01.271 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:01.543 [Pipeline] sh 00:02:01.821 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.091 [Pipeline] timeout 00:02:02.092 Timeout set to expire in 50 min 00:02:02.094 [Pipeline] { 00:02:02.106 [Pipeline] sh 00:02:02.383 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:02.948 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:02.959 [Pipeline] sh 00:02:03.238 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:03.510 [Pipeline] sh 00:02:03.789 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.060 [Pipeline] sh 00:02:04.388 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:04.388 ++ readlink -f spdk_repo 00:02:04.388 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:04.388 + [[ -n /home/vagrant/spdk_repo ]] 00:02:04.388 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:04.388 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:04.388 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:04.388 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:04.388 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:04.388 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:04.388 + cd /home/vagrant/spdk_repo 00:02:04.388 + source /etc/os-release 00:02:04.388 ++ NAME='Fedora Linux' 00:02:04.388 ++ VERSION='39 (Cloud Edition)' 00:02:04.388 ++ ID=fedora 00:02:04.388 ++ VERSION_ID=39 00:02:04.388 ++ VERSION_CODENAME= 00:02:04.388 ++ PLATFORM_ID=platform:f39 00:02:04.388 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:04.388 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:04.388 ++ LOGO=fedora-logo-icon 00:02:04.388 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:04.388 ++ HOME_URL=https://fedoraproject.org/ 00:02:04.388 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:04.388 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:04.388 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:04.388 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:04.388 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:04.388 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:04.388 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:04.388 ++ SUPPORT_END=2024-11-12 00:02:04.388 ++ VARIANT='Cloud Edition' 00:02:04.388 ++ VARIANT_ID=cloud 00:02:04.388 + uname -a 00:02:04.388 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:04.388 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:04.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:05.214 Hugepages 00:02:05.214 node hugesize free / total 00:02:05.214 node0 1048576kB 0 / 0 00:02:05.214 node0 2048kB 0 / 0 00:02:05.214 00:02:05.214 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.214 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:05.214 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:05.214 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:05.214 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:05.214 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:05.214 + rm -f /tmp/spdk-ld-path 00:02:05.214 + source autorun-spdk.conf 00:02:05.214 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.214 ++ SPDK_TEST_NVME=1 00:02:05.214 ++ SPDK_TEST_FTL=1 00:02:05.214 ++ SPDK_TEST_ISAL=1 00:02:05.214 ++ SPDK_RUN_ASAN=1 00:02:05.214 ++ SPDK_RUN_UBSAN=1 00:02:05.214 ++ SPDK_TEST_XNVME=1 00:02:05.214 ++ SPDK_TEST_NVME_FDP=1 00:02:05.214 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:05.214 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.214 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.214 ++ RUN_NIGHTLY=1 00:02:05.214 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.214 + [[ -n '' ]] 00:02:05.214 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:05.214 + for M in /var/spdk/build-*-manifest.txt 00:02:05.214 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:05.214 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.214 + for M in /var/spdk/build-*-manifest.txt 00:02:05.214 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.214 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.473 + for M in /var/spdk/build-*-manifest.txt 00:02:05.473 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.473 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.473 ++ uname 00:02:05.473 + [[ Linux == 
\L\i\n\u\x ]] 00:02:05.473 + sudo dmesg -T 00:02:05.473 + sudo dmesg --clear 00:02:05.473 + dmesg_pid=6032 00:02:05.473 + sudo dmesg -Tw 00:02:05.473 + [[ Fedora Linux == FreeBSD ]] 00:02:05.473 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.473 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.473 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.473 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.473 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.473 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.473 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.473 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.473 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.473 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.473 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.473 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.473 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.473 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.473 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:05.473 Test configuration: 00:02:05.473 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.473 SPDK_TEST_NVME=1 00:02:05.473 SPDK_TEST_FTL=1 00:02:05.473 SPDK_TEST_ISAL=1 00:02:05.473 SPDK_RUN_ASAN=1 00:02:05.473 SPDK_RUN_UBSAN=1 00:02:05.473 SPDK_TEST_XNVME=1 00:02:05.473 SPDK_TEST_NVME_FDP=1 00:02:05.473 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:05.473 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.473 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.473 RUN_NIGHTLY=1 05:49:28 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:05.473 05:49:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:05.473 05:49:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:05.473 05:49:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.473 05:49:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.473 05:49:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.473 05:49:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.473 05:49:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.473 05:49:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.473 05:49:28 -- paths/export.sh@5 -- $ export PATH 00:02:05.473 05:49:28 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.473 05:49:28 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:05.473 05:49:28 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:05.473 05:49:28 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733636968.XXXXXX 00:02:05.473 05:49:28 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733636968.XnkifC 00:02:05.473 05:49:28 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:05.473 05:49:28 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:05.473 05:49:28 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.473 05:49:28 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:05.473 05:49:28 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:05.473 05:49:28 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.473 05:49:28 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:05.473 05:49:28 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:05.473 05:49:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.473 05:49:28 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:05.473 05:49:28 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:05.473 05:49:28 -- pm/common@17 -- $ local monitor 00:02:05.473 05:49:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.473 05:49:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.473 05:49:28 -- pm/common@25 -- $ sleep 1 00:02:05.473 05:49:28 -- pm/common@21 -- $ date +%s 00:02:05.473 05:49:28 -- pm/common@21 -- $ date +%s 00:02:05.473 05:49:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733636968 00:02:05.474 05:49:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733636968 00:02:05.474 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733636968_collect-cpu-load.pm.log 00:02:05.474 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733636968_collect-vmstat.pm.log 00:02:06.853 05:49:29 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:06.853 05:49:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.853 05:49:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.853 05:49:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:06.853 05:49:29 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.853 
Sun Dec 8 05:49:29 AM UTC 2024 00:02:06.853 05:49:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.853 v24.09-1-gb18e1bd62 00:02:06.853 05:49:29 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:06.853 05:49:29 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:06.853 05:49:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:06.853 05:49:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:06.853 05:49:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.853 ************************************ 00:02:06.853 START TEST asan 00:02:06.853 ************************************ 00:02:06.853 using asan 00:02:06.853 05:49:29 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:06.853 00:02:06.853 real 0m0.000s 00:02:06.854 user 0m0.000s 00:02:06.854 sys 0m0.000s 00:02:06.854 05:49:29 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:06.854 05:49:29 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.854 ************************************ 00:02:06.854 END TEST asan 00:02:06.854 ************************************ 00:02:06.854 05:49:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.854 05:49:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.854 05:49:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:06.854 05:49:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:06.854 05:49:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.854 ************************************ 00:02:06.854 START TEST ubsan 00:02:06.854 ************************************ 00:02:06.854 using ubsan 00:02:06.854 05:49:29 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:06.854 00:02:06.854 real 0m0.000s 00:02:06.854 user 0m0.000s 00:02:06.854 sys 0m0.000s 00:02:06.854 05:49:29 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:06.854 05:49:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.854 ************************************ 00:02:06.854 END TEST ubsan 00:02:06.854 ************************************ 00:02:06.854 05:49:29 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:06.854 05:49:29 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:06.854 05:49:29 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:06.854 05:49:29 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:06.854 05:49:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:06.854 05:49:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.854 ************************************ 00:02:06.854 START TEST build_native_dpdk 00:02:06.854 ************************************ 00:02:06.854 05:49:29 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:06.854 05:49:29 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:06.854 caf0f5d395 version: 22.11.4 00:02:06.854 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:06.854 dc9c799c7d vhost: fix missing spinlock unlock 00:02:06.854 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:06.854 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:06.854 05:49:29 build_native_dpdk 
-- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:06.854 patching file config/rte_config.h 00:02:06.854 Hunk #1 succeeded at 60 (offset 1 line). 
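The xtrace block above walks the DPDK version (22.11.4) through SPDK's shell version comparator: scripts/common.sh splits each version string on ".", "-", and ":" into arrays (ver1/ver2), then compares component pairs numerically until one side wins. Below is a minimal sketch of the same idea; cmp_lt is a simplified, hypothetical stand-in for the real lt/cmp_versions helpers, not the verbatim scripts/common.sh source.

    # Minimal sketch of the comparison traced above (simplified stand-in).
    cmp_lt() {
        local IFS=.-:                       # split on the same separators the trace shows
        local -a ver1=($1) ver2=($2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            # first differing component decides; missing components count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                            # equal versions are not less-than
        # note: the real helper also validates that components are digits;
        # leading zeros would be read as octal in this bare-arithmetic sketch
    }

    cmp_lt 22.11.4 21.11.0; echo $?         # 1 -- matches the "return 1" at scripts/common.sh@367

Because the 22.11.4-vs-21.11.0 less-than test fails, the legacy branch is skipped and autobuild proceeds to patch config/rte_config.h, with patch(1) resolving the hunk one line below its recorded position ("offset 1 line").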
00:02:06.854 05:49:29 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:06.854 05:49:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:06.855 05:49:29 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:06.855 patching file lib/pcapng/rte_pcapng.c 00:02:06.855 Hunk #1 succeeded at 110 (offset -18 lines). 
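Both patches above are applied with patch -p1 (strip one leading path component) from inside the DPDK checkout, each gated on the detected DPDK version; the pcapng fix, per the lt 22.11.4 24.07.0 gate that returned 0 at scripts/common.sh@368, is only applied before DPDK 24.07. A schematic reconstruction of that gating follows, reusing the hypothetical cmp_lt helper from the previous sketch; the patch file path is illustrative, and the real flow in common/autobuild_common.sh differs in detail.

    # Schematic version-gated patching; $patches_dir and file names are hypothetical.
    dpdk_ver=22.11.4
    cd /home/vagrant/spdk_repo/dpdk

    # rte_config.h tweak: patch(1) tolerates context drift, reporting e.g.
    # "Hunk #1 succeeded at 60 (offset 1 line)" when the target moved.
    patch -p1 < "$patches_dir/rte_config.patch"

    if cmp_lt "$dpdk_ver" 24.07.0; then                # true for 22.11.4
        # pcapng fix is only needed before 24.07, per the version gate traced above
        patch -p1 < "$patches_dir/rte_pcapng.patch"
    fi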
00:02:06.855 05:49:29 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:06.855 05:49:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:06.855 05:49:29 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:06.855 05:49:29 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:06.855 05:49:29 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:06.855 05:49:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:06.855 05:49:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:12.133 The Meson build system 00:02:12.133 Version: 1.5.0 00:02:12.133 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:12.133 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:12.133 Build type: native build 00:02:12.133 Program cat found: YES 
(/usr/bin/cat) 00:02:12.133 Project name: DPDK 00:02:12.133 Project version: 22.11.4 00:02:12.133 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:12.133 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:12.133 Host machine cpu family: x86_64 00:02:12.133 Host machine cpu: x86_64 00:02:12.133 Message: ## Building in Developer Mode ## 00:02:12.133 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:12.133 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:12.133 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:12.133 Program objdump found: YES (/usr/bin/objdump) 00:02:12.133 Program python3 found: YES (/usr/bin/python3) 00:02:12.133 Program cat found: YES (/usr/bin/cat) 00:02:12.133 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:12.133 Checking for size of "void *" : 8 00:02:12.133 Checking for size of "void *" : 8 (cached) 00:02:12.133 Library m found: YES 00:02:12.133 Library numa found: YES 00:02:12.133 Has header "numaif.h" : YES 00:02:12.133 Library fdt found: NO 00:02:12.133 Library execinfo found: NO 00:02:12.133 Has header "execinfo.h" : YES 00:02:12.133 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:12.133 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:12.133 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:12.133 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:12.133 Run-time dependency openssl found: YES 3.1.1 00:02:12.133 Run-time dependency libpcap found: YES 1.10.4 00:02:12.134 Has header "pcap.h" with dependency libpcap: YES 00:02:12.134 Compiler for C supports arguments -Wcast-qual: YES 00:02:12.134 Compiler for C supports arguments -Wdeprecated: YES 00:02:12.134 Compiler for C supports arguments -Wformat: YES 00:02:12.134 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:12.134 Compiler for C supports arguments -Wformat-security: NO 00:02:12.134 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.134 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:12.134 Compiler for C supports arguments -Wnested-externs: YES 00:02:12.134 Compiler for C supports arguments -Wold-style-definition: YES 00:02:12.134 Compiler for C supports arguments -Wpointer-arith: YES 00:02:12.134 Compiler for C supports arguments -Wsign-compare: YES 00:02:12.134 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:12.134 Compiler for C supports arguments -Wundef: YES 00:02:12.134 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.134 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:12.134 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:12.134 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.134 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:12.134 Compiler for C supports arguments -mavx512f: YES 00:02:12.134 Checking if "AVX512 checking" compiles: YES 00:02:12.134 Fetching value of define "__SSE4_2__" : 1 00:02:12.134 Fetching value of define "__AES__" : 1 00:02:12.134 Fetching value of define "__AVX__" : 1 00:02:12.134 Fetching value of define "__AVX2__" : 1 00:02:12.134 Fetching value of define "__AVX512BW__" : (undefined) 00:02:12.134 Fetching value of define "__AVX512CD__" : (undefined) 00:02:12.134 Fetching value of define 
"__AVX512DQ__" : (undefined) 00:02:12.134 Fetching value of define "__AVX512F__" : (undefined) 00:02:12.134 Fetching value of define "__AVX512VL__" : (undefined) 00:02:12.134 Fetching value of define "__PCLMUL__" : 1 00:02:12.134 Fetching value of define "__RDRND__" : 1 00:02:12.134 Fetching value of define "__RDSEED__" : 1 00:02:12.134 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:12.134 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:12.134 Message: lib/kvargs: Defining dependency "kvargs" 00:02:12.134 Message: lib/telemetry: Defining dependency "telemetry" 00:02:12.134 Checking for function "getentropy" : YES 00:02:12.134 Message: lib/eal: Defining dependency "eal" 00:02:12.134 Message: lib/ring: Defining dependency "ring" 00:02:12.134 Message: lib/rcu: Defining dependency "rcu" 00:02:12.134 Message: lib/mempool: Defining dependency "mempool" 00:02:12.134 Message: lib/mbuf: Defining dependency "mbuf" 00:02:12.134 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:12.134 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:12.134 Compiler for C supports arguments -mpclmul: YES 00:02:12.134 Compiler for C supports arguments -maes: YES 00:02:12.134 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.134 Compiler for C supports arguments -mavx512bw: YES 00:02:12.134 Compiler for C supports arguments -mavx512dq: YES 00:02:12.134 Compiler for C supports arguments -mavx512vl: YES 00:02:12.134 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.134 Compiler for C supports arguments -mavx2: YES 00:02:12.134 Compiler for C supports arguments -mavx: YES 00:02:12.134 Message: lib/net: Defining dependency "net" 00:02:12.134 Message: lib/meter: Defining dependency "meter" 00:02:12.134 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.134 Message: lib/pci: Defining dependency "pci" 00:02:12.134 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.134 Message: lib/metrics: Defining dependency "metrics" 00:02:12.134 Message: lib/hash: Defining dependency "hash" 00:02:12.134 Message: lib/timer: Defining dependency "timer" 00:02:12.134 Fetching value of define "__AVX2__" : 1 (cached) 00:02:12.134 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:12.134 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:12.134 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:12.134 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:12.134 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:12.134 Message: lib/acl: Defining dependency "acl" 00:02:12.134 Message: lib/bbdev: Defining dependency "bbdev" 00:02:12.134 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:12.134 Run-time dependency libelf found: YES 0.191 00:02:12.134 Message: lib/bpf: Defining dependency "bpf" 00:02:12.134 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:12.134 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.134 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.134 Message: lib/distributor: Defining dependency "distributor" 00:02:12.134 Message: lib/efd: Defining dependency "efd" 00:02:12.134 Message: lib/eventdev: Defining dependency "eventdev" 00:02:12.134 Message: lib/gpudev: Defining dependency "gpudev" 00:02:12.134 Message: lib/gro: Defining dependency "gro" 00:02:12.134 Message: lib/gso: Defining dependency "gso" 00:02:12.134 Message: lib/ip_frag: Defining dependency "ip_frag" 
00:02:12.134 Message: lib/jobstats: Defining dependency "jobstats" 00:02:12.134 Message: lib/latencystats: Defining dependency "latencystats" 00:02:12.134 Message: lib/lpm: Defining dependency "lpm" 00:02:12.134 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:12.134 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:12.134 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:12.134 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:12.134 Message: lib/member: Defining dependency "member" 00:02:12.134 Message: lib/pcapng: Defining dependency "pcapng" 00:02:12.134 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.134 Message: lib/power: Defining dependency "power" 00:02:12.134 Message: lib/rawdev: Defining dependency "rawdev" 00:02:12.134 Message: lib/regexdev: Defining dependency "regexdev" 00:02:12.134 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.134 Message: lib/rib: Defining dependency "rib" 00:02:12.134 Message: lib/reorder: Defining dependency "reorder" 00:02:12.134 Message: lib/sched: Defining dependency "sched" 00:02:12.134 Message: lib/security: Defining dependency "security" 00:02:12.134 Message: lib/stack: Defining dependency "stack" 00:02:12.134 Has header "linux/userfaultfd.h" : YES 00:02:12.134 Message: lib/vhost: Defining dependency "vhost" 00:02:12.134 Message: lib/ipsec: Defining dependency "ipsec" 00:02:12.134 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:12.134 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:12.134 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:12.134 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:12.134 Message: lib/fib: Defining dependency "fib" 00:02:12.134 Message: lib/port: Defining dependency "port" 00:02:12.134 Message: lib/pdump: Defining dependency "pdump" 00:02:12.134 Message: lib/table: Defining dependency "table" 00:02:12.134 Message: lib/pipeline: Defining dependency "pipeline" 00:02:12.134 Message: lib/graph: Defining dependency "graph" 00:02:12.134 Message: lib/node: Defining dependency "node" 00:02:12.134 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.134 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.134 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.134 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.134 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:12.134 Compiler for C supports arguments -Wno-unused-value: YES 00:02:12.134 Compiler for C supports arguments -Wno-format: YES 00:02:12.134 Compiler for C supports arguments -Wno-format-security: YES 00:02:12.134 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:13.518 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:13.518 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:13.519 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:13.519 Fetching value of define "__AVX2__" : 1 (cached) 00:02:13.519 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:13.519 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.519 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:13.519 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:13.519 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:13.519 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:13.519 
Configuring doxy-api.conf using configuration 00:02:13.519 Program sphinx-build found: NO 00:02:13.519 Configuring rte_build_config.h using configuration 00:02:13.519 Message: 00:02:13.519 ================= 00:02:13.519 Applications Enabled 00:02:13.519 ================= 00:02:13.519 00:02:13.519 apps: 00:02:13.519 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:13.519 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:13.519 test-security-perf, 00:02:13.519 00:02:13.519 Message: 00:02:13.519 ================= 00:02:13.519 Libraries Enabled 00:02:13.519 ================= 00:02:13.519 00:02:13.519 libs: 00:02:13.519 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:13.519 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:13.519 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:13.519 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:13.519 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:13.519 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:13.519 table, pipeline, graph, node, 00:02:13.519 00:02:13.519 Message: 00:02:13.519 =============== 00:02:13.519 Drivers Enabled 00:02:13.519 =============== 00:02:13.519 00:02:13.519 common: 00:02:13.519 00:02:13.519 bus: 00:02:13.519 pci, vdev, 00:02:13.519 mempool: 00:02:13.519 ring, 00:02:13.519 dma: 00:02:13.519 00:02:13.519 net: 00:02:13.519 i40e, 00:02:13.519 raw: 00:02:13.519 00:02:13.519 crypto: 00:02:13.519 00:02:13.519 compress: 00:02:13.519 00:02:13.519 regex: 00:02:13.519 00:02:13.519 vdpa: 00:02:13.519 00:02:13.519 event: 00:02:13.519 00:02:13.519 baseband: 00:02:13.519 00:02:13.519 gpu: 00:02:13.519 00:02:13.519 00:02:13.519 Message: 00:02:13.519 ================= 00:02:13.519 Content Skipped 00:02:13.519 ================= 00:02:13.519 00:02:13.519 apps: 00:02:13.519 00:02:13.519 libs: 00:02:13.519 kni: explicitly disabled via build config (deprecated lib) 00:02:13.519 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:13.519 00:02:13.519 drivers: 00:02:13.519 common/cpt: not in enabled drivers build config 00:02:13.519 common/dpaax: not in enabled drivers build config 00:02:13.519 common/iavf: not in enabled drivers build config 00:02:13.519 common/idpf: not in enabled drivers build config 00:02:13.519 common/mvep: not in enabled drivers build config 00:02:13.519 common/octeontx: not in enabled drivers build config 00:02:13.519 bus/auxiliary: not in enabled drivers build config 00:02:13.519 bus/dpaa: not in enabled drivers build config 00:02:13.519 bus/fslmc: not in enabled drivers build config 00:02:13.519 bus/ifpga: not in enabled drivers build config 00:02:13.519 bus/vmbus: not in enabled drivers build config 00:02:13.519 common/cnxk: not in enabled drivers build config 00:02:13.519 common/mlx5: not in enabled drivers build config 00:02:13.519 common/qat: not in enabled drivers build config 00:02:13.519 common/sfc_efx: not in enabled drivers build config 00:02:13.519 mempool/bucket: not in enabled drivers build config 00:02:13.519 mempool/cnxk: not in enabled drivers build config 00:02:13.519 mempool/dpaa: not in enabled drivers build config 00:02:13.519 mempool/dpaa2: not in enabled drivers build config 00:02:13.519 mempool/octeontx: not in enabled drivers build config 00:02:13.519 mempool/stack: not in enabled drivers build config 00:02:13.519 dma/cnxk: not in enabled drivers build 
config 00:02:13.519 dma/dpaa: not in enabled drivers build config 00:02:13.519 dma/dpaa2: not in enabled drivers build config 00:02:13.519 dma/hisilicon: not in enabled drivers build config 00:02:13.519 dma/idxd: not in enabled drivers build config 00:02:13.519 dma/ioat: not in enabled drivers build config 00:02:13.519 dma/skeleton: not in enabled drivers build config 00:02:13.519 net/af_packet: not in enabled drivers build config 00:02:13.519 net/af_xdp: not in enabled drivers build config 00:02:13.519 net/ark: not in enabled drivers build config 00:02:13.519 net/atlantic: not in enabled drivers build config 00:02:13.519 net/avp: not in enabled drivers build config 00:02:13.519 net/axgbe: not in enabled drivers build config 00:02:13.519 net/bnx2x: not in enabled drivers build config 00:02:13.519 net/bnxt: not in enabled drivers build config 00:02:13.519 net/bonding: not in enabled drivers build config 00:02:13.519 net/cnxk: not in enabled drivers build config 00:02:13.519 net/cxgbe: not in enabled drivers build config 00:02:13.519 net/dpaa: not in enabled drivers build config 00:02:13.519 net/dpaa2: not in enabled drivers build config 00:02:13.519 net/e1000: not in enabled drivers build config 00:02:13.519 net/ena: not in enabled drivers build config 00:02:13.519 net/enetc: not in enabled drivers build config 00:02:13.519 net/enetfec: not in enabled drivers build config 00:02:13.519 net/enic: not in enabled drivers build config 00:02:13.519 net/failsafe: not in enabled drivers build config 00:02:13.519 net/fm10k: not in enabled drivers build config 00:02:13.519 net/gve: not in enabled drivers build config 00:02:13.519 net/hinic: not in enabled drivers build config 00:02:13.519 net/hns3: not in enabled drivers build config 00:02:13.519 net/iavf: not in enabled drivers build config 00:02:13.519 net/ice: not in enabled drivers build config 00:02:13.519 net/idpf: not in enabled drivers build config 00:02:13.519 net/igc: not in enabled drivers build config 00:02:13.519 net/ionic: not in enabled drivers build config 00:02:13.519 net/ipn3ke: not in enabled drivers build config 00:02:13.519 net/ixgbe: not in enabled drivers build config 00:02:13.519 net/kni: not in enabled drivers build config 00:02:13.519 net/liquidio: not in enabled drivers build config 00:02:13.519 net/mana: not in enabled drivers build config 00:02:13.519 net/memif: not in enabled drivers build config 00:02:13.519 net/mlx4: not in enabled drivers build config 00:02:13.519 net/mlx5: not in enabled drivers build config 00:02:13.519 net/mvneta: not in enabled drivers build config 00:02:13.519 net/mvpp2: not in enabled drivers build config 00:02:13.519 net/netvsc: not in enabled drivers build config 00:02:13.519 net/nfb: not in enabled drivers build config 00:02:13.519 net/nfp: not in enabled drivers build config 00:02:13.519 net/ngbe: not in enabled drivers build config 00:02:13.519 net/null: not in enabled drivers build config 00:02:13.519 net/octeontx: not in enabled drivers build config 00:02:13.519 net/octeon_ep: not in enabled drivers build config 00:02:13.519 net/pcap: not in enabled drivers build config 00:02:13.519 net/pfe: not in enabled drivers build config 00:02:13.519 net/qede: not in enabled drivers build config 00:02:13.519 net/ring: not in enabled drivers build config 00:02:13.519 net/sfc: not in enabled drivers build config 00:02:13.519 net/softnic: not in enabled drivers build config 00:02:13.519 net/tap: not in enabled drivers build config 00:02:13.519 net/thunderx: not in enabled drivers build config 
00:02:13.519 net/txgbe: not in enabled drivers build config 00:02:13.519 net/vdev_netvsc: not in enabled drivers build config 00:02:13.519 net/vhost: not in enabled drivers build config 00:02:13.519 net/virtio: not in enabled drivers build config 00:02:13.519 net/vmxnet3: not in enabled drivers build config 00:02:13.519 raw/cnxk_bphy: not in enabled drivers build config 00:02:13.519 raw/cnxk_gpio: not in enabled drivers build config 00:02:13.519 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:13.519 raw/ifpga: not in enabled drivers build config 00:02:13.519 raw/ntb: not in enabled drivers build config 00:02:13.519 raw/skeleton: not in enabled drivers build config 00:02:13.519 crypto/armv8: not in enabled drivers build config 00:02:13.519 crypto/bcmfs: not in enabled drivers build config 00:02:13.519 crypto/caam_jr: not in enabled drivers build config 00:02:13.519 crypto/ccp: not in enabled drivers build config 00:02:13.519 crypto/cnxk: not in enabled drivers build config 00:02:13.519 crypto/dpaa_sec: not in enabled drivers build config 00:02:13.519 crypto/dpaa2_sec: not in enabled drivers build config 00:02:13.519 crypto/ipsec_mb: not in enabled drivers build config 00:02:13.519 crypto/mlx5: not in enabled drivers build config 00:02:13.519 crypto/mvsam: not in enabled drivers build config 00:02:13.519 crypto/nitrox: not in enabled drivers build config 00:02:13.519 crypto/null: not in enabled drivers build config 00:02:13.519 crypto/octeontx: not in enabled drivers build config 00:02:13.519 crypto/openssl: not in enabled drivers build config 00:02:13.519 crypto/scheduler: not in enabled drivers build config 00:02:13.519 crypto/uadk: not in enabled drivers build config 00:02:13.519 crypto/virtio: not in enabled drivers build config 00:02:13.519 compress/isal: not in enabled drivers build config 00:02:13.519 compress/mlx5: not in enabled drivers build config 00:02:13.519 compress/octeontx: not in enabled drivers build config 00:02:13.519 compress/zlib: not in enabled drivers build config 00:02:13.519 regex/mlx5: not in enabled drivers build config 00:02:13.519 regex/cn9k: not in enabled drivers build config 00:02:13.519 vdpa/ifc: not in enabled drivers build config 00:02:13.519 vdpa/mlx5: not in enabled drivers build config 00:02:13.519 vdpa/sfc: not in enabled drivers build config 00:02:13.519 event/cnxk: not in enabled drivers build config 00:02:13.519 event/dlb2: not in enabled drivers build config 00:02:13.519 event/dpaa: not in enabled drivers build config 00:02:13.520 event/dpaa2: not in enabled drivers build config 00:02:13.520 event/dsw: not in enabled drivers build config 00:02:13.520 event/opdl: not in enabled drivers build config 00:02:13.520 event/skeleton: not in enabled drivers build config 00:02:13.520 event/sw: not in enabled drivers build config 00:02:13.520 event/octeontx: not in enabled drivers build config 00:02:13.520 baseband/acc: not in enabled drivers build config 00:02:13.520 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:13.520 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:13.520 baseband/la12xx: not in enabled drivers build config 00:02:13.520 baseband/null: not in enabled drivers build config 00:02:13.520 baseband/turbo_sw: not in enabled drivers build config 00:02:13.520 gpu/cuda: not in enabled drivers build config 00:02:13.520 00:02:13.520 00:02:13.520 Build targets in project: 314 00:02:13.520 00:02:13.520 DPDK 22.11.4 00:02:13.520 00:02:13.520 User defined options 00:02:13.520 libdir : lib 00:02:13.520 prefix : 
/home/vagrant/spdk_repo/dpdk/build 00:02:13.520 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:13.520 c_link_args : 00:02:13.520 enable_docs : false 00:02:13.520 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:13.520 enable_kmods : false 00:02:13.520 machine : native 00:02:13.520 tests : false 00:02:13.520 00:02:13.520 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.520 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:13.779 05:49:36 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:13.779 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:13.779 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:13.779 [2/743] Generating lib/rte_kvargs_def with a custom command 00:02:13.779 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:13.779 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:13.779 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:13.779 [6/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.779 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.779 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:13.779 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.779 [10/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.779 [11/743] Linking static target lib/librte_kvargs.a 00:02:14.038 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.038 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.038 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.038 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.038 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.038 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.038 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.038 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.038 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.296 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:14.296 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.297 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:14.297 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.297 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.297 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.297 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.297 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.297 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.297 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.297 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 
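Meson's WARNING above flags the deprecated bare `meson [options]` invocation. A hedged reconstruction of the equivalent non-deprecated `meson setup` form, with each option copied from the "User defined options" block printed above (the exact command line is not shown in the log):

  meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10

The narrow -Denable_drivers list matches the "Drivers Enabled" summary earlier (pci/vdev buses, ring mempool, i40e net) and explains why the driver objects near the end of the build are i40e-only.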
00:02:14.297 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.297 [33/743] Linking static target lib/librte_telemetry.a 00:02:14.297 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.555 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.556 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.556 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.556 [38/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:14.556 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.556 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.556 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.814 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.814 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.814 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.814 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.814 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:14.814 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.814 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.815 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.815 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:15.073 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.073 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.073 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.073 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.073 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.073 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.073 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.073 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.073 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.073 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.073 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.073 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.073 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.073 [64/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:15.073 [65/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.073 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.332 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.332 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.332 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.332 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.332 [71/743] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.332 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.332 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.332 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.332 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.332 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.332 [77/743] Generating lib/rte_eal_def with a custom command 00:02:15.332 [78/743] Generating lib/rte_eal_mingw with a custom command 00:02:15.332 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.332 [80/743] Generating lib/rte_ring_def with a custom command 00:02:15.332 [81/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.332 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:15.332 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:15.332 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.590 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:15.590 [86/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.590 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.590 [88/743] Linking static target lib/librte_ring.a 00:02:15.590 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.590 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:15.590 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:15.590 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.848 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.848 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.848 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.848 [96/743] Linking static target lib/librte_eal.a 00:02:16.105 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.105 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:16.105 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.105 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.105 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:16.363 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.363 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.363 [104/743] Linking static target lib/librte_rcu.a 00:02:16.363 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.621 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.621 [107/743] Linking static target lib/librte_mempool.a 00:02:16.621 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.621 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.621 [110/743] Generating lib/rte_net_def with a custom command 00:02:16.621 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:16.621 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:16.621 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.879 [114/743] Generating lib/rte_meter_def with a 
custom command 00:02:16.879 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:16.879 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.879 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.879 [118/743] Linking static target lib/librte_meter.a 00:02:16.879 [119/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.137 [120/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.137 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.137 [122/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.137 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.137 [124/743] Linking static target lib/librte_net.a 00:02:17.137 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.137 [126/743] Linking static target lib/librte_mbuf.a 00:02:17.395 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.395 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.653 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:17.653 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:17.653 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:17.653 [132/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.653 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.910 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:17.910 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.474 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.474 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:18.474 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:18.474 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.474 [140/743] Generating lib/rte_pci_def with a custom command 00:02:18.474 [141/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.474 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:18.474 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.474 [144/743] Linking static target lib/librte_pci.a 00:02:18.474 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.474 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.474 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.474 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.732 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.732 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.732 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.732 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.732 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.732 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.732 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.732 [156/743] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.732 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:18.732 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:18.732 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.990 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:18.990 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.990 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:18.990 [163/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.990 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:18.990 [165/743] Generating lib/rte_hash_def with a custom command 00:02:18.990 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:18.990 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.990 [168/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.990 [169/743] Generating lib/rte_timer_def with a custom command 00:02:18.990 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:19.247 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.247 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.247 [173/743] Linking static target lib/librte_cmdline.a 00:02:19.507 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:19.507 [175/743] Linking static target lib/librte_metrics.a 00:02:19.507 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.507 [177/743] Linking static target lib/librte_timer.a 00:02:19.766 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.766 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.025 [180/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.025 [181/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.025 [182/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.283 [183/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.283 [184/743] Linking static target lib/librte_ethdev.a 00:02:20.542 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:20.542 [186/743] Generating lib/rte_acl_def with a custom command 00:02:20.542 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.542 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:20.542 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:20.801 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:20.801 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:20.801 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:20.801 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:21.060 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:21.319 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:21.319 [196/743] Linking static target lib/librte_bitratestats.a 00:02:21.670 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:21.670 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:21.670 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:21.670 [200/743] Linking static target lib/librte_bbdev.a 00:02:21.670 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:21.928 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.928 [203/743] Linking static target lib/librte_hash.a 00:02:22.185 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:22.185 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:22.185 [206/743] Linking static target lib/acl/libavx512_tmp.a 00:02:22.185 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.185 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:22.444 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:22.701 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.701 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:22.701 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:22.701 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:22.701 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:22.701 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:22.701 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:22.959 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:22.959 [218/743] Linking static target lib/librte_acl.a 00:02:22.959 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:22.959 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:22.959 [221/743] Linking static target lib/librte_cfgfile.a 00:02:23.217 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:23.217 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:23.217 [224/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.217 [225/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:23.217 [226/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.217 [227/743] Linking target lib/librte_eal.so.23.0 00:02:23.217 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.217 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:23.476 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:23.476 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:23.476 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:23.476 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:23.476 [234/743] Linking target lib/librte_ring.so.23.0 00:02:23.476 [235/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:23.476 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:23.476 [237/743] Linking target lib/librte_rcu.so.23.0 00:02:23.476 [238/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:23.476 [239/743] Linking target lib/librte_mempool.so.23.0 00:02:23.734 [240/743] Linking target lib/librte_meter.so.23.0 00:02:23.734 [241/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.734 
[242/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:23.734 [243/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:23.734 [244/743] Linking target lib/librte_pci.so.23.0 00:02:23.734 [245/743] Linking target lib/librte_timer.so.23.0 00:02:23.734 [246/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:23.734 [247/743] Linking target lib/librte_mbuf.so.23.0 00:02:23.734 [248/743] Linking target lib/librte_acl.so.23.0 00:02:23.734 [249/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.734 [250/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:23.992 [251/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:23.992 [252/743] Linking static target lib/librte_bpf.a 00:02:23.992 [253/743] Linking static target lib/librte_compressdev.a 00:02:23.993 [254/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:23.993 [255/743] Linking target lib/librte_cfgfile.so.23.0 00:02:23.993 [256/743] Linking target lib/librte_net.so.23.0 00:02:23.993 [257/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:23.993 [258/743] Linking target lib/librte_bbdev.so.23.0 00:02:23.993 [259/743] Generating lib/rte_distributor_def with a custom command 00:02:23.993 [260/743] Generating lib/rte_distributor_mingw with a custom command 00:02:23.993 [261/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:23.993 [262/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.993 [263/743] Linking target lib/librte_cmdline.so.23.0 00:02:23.993 [264/743] Linking target lib/librte_hash.so.23.0 00:02:24.274 [265/743] Generating lib/rte_efd_def with a custom command 00:02:24.274 [266/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.274 [267/743] Generating lib/rte_efd_mingw with a custom command 00:02:24.274 [268/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.274 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:24.274 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:24.532 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.532 [272/743] Linking static target lib/librte_distributor.a 00:02:24.791 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.791 [274/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:24.791 [275/743] Linking target lib/librte_compressdev.so.23.0 00:02:24.791 [276/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.791 [277/743] Linking target lib/librte_distributor.so.23.0 00:02:24.791 [278/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.049 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.049 [280/743] Generating lib/rte_eventdev_def with a custom command 00:02:25.049 [281/743] Linking target lib/librte_ethdev.so.23.0 00:02:25.049 [282/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:25.049 [283/743] Generating symbol file 
lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:25.049 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:25.309 [285/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:25.309 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:02:25.309 [287/743] Linking target lib/librte_bpf.so.23.0 00:02:25.309 [288/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:25.309 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:25.568 [290/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:25.568 [291/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:25.568 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:25.568 [293/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.568 [294/743] Linking static target lib/librte_efd.a 00:02:25.826 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.826 [296/743] Linking static target lib/librte_cryptodev.a 00:02:25.826 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:25.826 [298/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.826 [299/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.826 [300/743] Linking static target lib/librte_gpudev.a 00:02:25.826 [301/743] Linking target lib/librte_efd.so.23.0 00:02:26.085 [302/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.085 [303/743] Generating lib/rte_gro_def with a custom command 00:02:26.085 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:26.342 [305/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:26.342 [306/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:26.343 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.343 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:26.600 [309/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.600 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:26.858 [311/743] Linking target lib/librte_gpudev.so.23.0 00:02:26.858 [312/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:26.858 [313/743] Generating lib/rte_gso_def with a custom command 00:02:26.858 [314/743] Linking static target lib/librte_gro.a 00:02:26.858 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:26.858 [316/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:27.115 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:27.115 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:27.115 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.115 [320/743] Linking target lib/librte_gro.so.23.0 00:02:27.115 [321/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:27.115 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:27.115 [323/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:27.115 [324/743] Linking static target lib/librte_eventdev.a 00:02:27.115 [325/743] Generating lib/rte_ip_frag_def with a custom command 00:02:27.115 [326/743] Generating lib/rte_ip_frag_mingw with a custom 
command 00:02:27.373 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:27.373 [328/743] Linking static target lib/librte_jobstats.a 00:02:27.373 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:27.373 [330/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:27.373 [331/743] Linking static target lib/librte_gso.a 00:02:27.373 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:27.630 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.630 [334/743] Linking target lib/librte_gso.so.23.0 00:02:27.630 [335/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.630 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:27.630 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:27.630 [338/743] Linking target lib/librte_jobstats.so.23.0 00:02:27.630 [339/743] Generating lib/rte_latencystats_def with a custom command 00:02:27.630 [340/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:27.889 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:27.889 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:02:27.889 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:27.889 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:27.889 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:27.889 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:27.889 [347/743] Linking static target lib/librte_ip_frag.a 00:02:27.889 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.147 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:28.147 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:28.147 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.405 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:28.405 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:28.405 [354/743] Linking static target lib/librte_latencystats.a 00:02:28.405 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:28.405 [356/743] Generating lib/rte_member_def with a custom command 00:02:28.405 [357/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:28.405 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:28.663 [359/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:28.663 [360/743] Generating lib/rte_member_mingw with a custom command 00:02:28.663 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:28.663 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:28.663 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:28.663 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.663 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.663 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:28.663 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.663 
[368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.920 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:28.920 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:28.920 [371/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.178 [372/743] Linking target lib/librte_eventdev.so.23.0 00:02:29.178 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:29.178 [374/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:29.178 [375/743] Linking static target lib/librte_lpm.a 00:02:29.178 [376/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:29.178 [377/743] Generating lib/rte_power_def with a custom command 00:02:29.178 [378/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:29.178 [379/743] Generating lib/rte_power_mingw with a custom command 00:02:29.436 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:29.436 [381/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.436 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:29.436 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:29.436 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:29.436 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.436 [386/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.436 [387/743] Generating lib/rte_dmadev_def with a custom command 00:02:29.436 [388/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:29.436 [389/743] Linking static target lib/librte_pcapng.a 00:02:29.436 [390/743] Linking target lib/librte_lpm.so.23.0 00:02:29.436 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:29.695 [392/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:29.695 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:29.695 [394/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:29.695 [395/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:29.695 [396/743] Generating lib/rte_rib_def with a custom command 00:02:29.695 [397/743] Linking static target lib/librte_rawdev.a 00:02:29.695 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:29.695 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:29.695 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:29.954 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.954 [402/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:29.954 [403/743] Linking target lib/librte_pcapng.so.23.0 00:02:29.954 [404/743] Linking static target lib/librte_dmadev.a 00:02:29.954 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:29.954 [406/743] Linking static target lib/librte_power.a 00:02:29.954 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:30.212 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.212 [409/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:30.212 [410/743] Linking static target 
lib/librte_regexdev.a 00:02:30.212 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:30.212 [412/743] Linking target lib/librte_rawdev.so.23.0 00:02:30.212 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:30.212 [414/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:30.212 [415/743] Linking static target lib/librte_member.a 00:02:30.212 [416/743] Generating lib/rte_sched_def with a custom command 00:02:30.212 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:30.212 [418/743] Generating lib/rte_sched_mingw with a custom command 00:02:30.212 [419/743] Generating lib/rte_security_def with a custom command 00:02:30.470 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:30.470 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.470 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:30.470 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:30.470 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:30.470 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:30.470 [426/743] Generating lib/rte_stack_def with a custom command 00:02:30.470 [427/743] Generating lib/rte_stack_mingw with a custom command 00:02:30.470 [428/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:30.470 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:30.470 [430/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.470 [431/743] Linking static target lib/librte_stack.a 00:02:30.470 [432/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.470 [433/743] Linking static target lib/librte_reorder.a 00:02:30.729 [434/743] Linking target lib/librte_member.so.23.0 00:02:30.729 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.729 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.729 [437/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.729 [438/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.729 [439/743] Linking target lib/librte_stack.so.23.0 00:02:30.729 [440/743] Linking target lib/librte_regexdev.so.23.0 00:02:30.729 [441/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.988 [442/743] Linking target lib/librte_reorder.so.23.0 00:02:30.988 [443/743] Linking target lib/librte_power.so.23.0 00:02:30.988 [444/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:30.988 [445/743] Linking static target lib/librte_rib.a 00:02:31.246 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.246 [447/743] Linking static target lib/librte_security.a 00:02:31.246 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.246 [449/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.246 [450/743] Linking target lib/librte_rib.so.23.0 00:02:31.246 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:31.505 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:31.505 [453/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:31.505 
[454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.505 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.505 [456/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.764 [457/743] Linking target lib/librte_security.so.23.0 00:02:31.764 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:31.764 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:31.764 [460/743] Linking static target lib/librte_sched.a 00:02:32.330 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.330 [462/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:32.330 [463/743] Linking target lib/librte_sched.so.23.0 00:02:32.330 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:32.330 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:32.330 [466/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:32.330 [467/743] Generating lib/rte_ipsec_def with a custom command 00:02:32.330 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:32.587 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.587 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:32.587 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:32.844 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:32.844 [473/743] Generating lib/rte_fib_def with a custom command 00:02:32.844 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:33.102 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:33.102 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:33.102 [477/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:33.102 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:33.102 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:33.359 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:33.359 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:33.359 [482/743] Linking static target lib/librte_ipsec.a 00:02:33.617 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.617 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:33.892 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:33.892 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:33.892 [487/743] Linking static target lib/librte_fib.a 00:02:33.892 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:33.892 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:34.150 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:34.150 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:34.150 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.150 [493/743] Linking target lib/librte_fib.so.23.0 00:02:34.408 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:34.974 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:34.974 [496/743] Generating lib/rte_port_def with a custom command 00:02:34.974 [497/743] Generating 
lib/rte_port_mingw with a custom command 00:02:34.974 [498/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:34.974 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:34.974 [500/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:35.231 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:02:35.231 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:35.231 [503/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:35.231 [504/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:35.231 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:35.489 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:35.489 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:35.489 [508/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:35.489 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:35.489 [510/743] Linking static target lib/librte_port.a 00:02:35.747 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:36.005 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:36.005 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.005 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:36.005 [515/743] Linking target lib/librte_port.so.23.0 00:02:36.263 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:36.263 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:36.263 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:36.263 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:36.263 [520/743] Linking static target lib/librte_pdump.a 00:02:36.520 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.779 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:36.779 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:36.779 [524/743] Generating lib/rte_table_def with a custom command 00:02:36.779 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:36.779 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:37.037 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:37.037 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:37.037 [529/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.295 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:37.295 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:37.295 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:37.295 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:37.553 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:37.554 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:37.554 [536/743] Linking static target lib/librte_table.a 00:02:37.554 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:37.811 [538/743] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:38.068 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:38.068 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.068 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:38.068 [542/743] Linking target lib/librte_table.so.23.0 00:02:38.326 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:38.326 [544/743] Generating lib/rte_graph_def with a custom command 00:02:38.326 [545/743] Generating lib/rte_graph_mingw with a custom command 00:02:38.326 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:38.326 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:38.582 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:38.840 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:38.840 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:38.840 [551/743] Linking static target lib/librte_graph.a 00:02:38.840 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:39.097 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:39.097 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:39.097 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:39.662 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:39.662 [557/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:39.662 [558/743] Generating lib/rte_node_def with a custom command 00:02:39.662 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:39.662 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:39.662 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.662 [562/743] Linking target lib/librte_graph.so.23.0 00:02:39.662 [563/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:39.920 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:39.920 [565/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:39.920 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:39.920 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:39.920 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:39.920 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:39.920 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.920 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:40.178 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:40.178 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.178 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:40.178 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:40.178 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:40.178 [577/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.178 [578/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:40.178 [579/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.178 [580/743] 
Linking static target lib/librte_node.a 00:02:40.178 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.437 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:40.437 [583/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.437 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.437 [585/743] Linking static target drivers/librte_bus_vdev.a 00:02:40.437 [586/743] Linking target lib/librte_node.so.23.0 00:02:40.437 [587/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.437 [588/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.437 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:40.696 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.696 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:40.696 [592/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:40.696 [593/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.696 [594/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.696 [595/743] Linking static target drivers/librte_bus_pci.a 00:02:40.954 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:41.212 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:41.212 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:41.212 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.212 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:41.212 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:41.471 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:41.471 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.471 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.729 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:41.729 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.729 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:41.729 [608/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:41.729 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.729 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:41.987 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:42.552 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:42.552 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:42.552 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:42.809 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:43.067 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:43.067 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 
00:02:43.633 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:43.633 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:43.633 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:43.902 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:43.902 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:43.902 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:43.902 [624/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:44.181 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:45.139 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:45.139 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:45.397 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:45.397 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:45.397 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:45.397 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:45.397 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:45.654 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:45.654 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:45.654 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:45.913 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:46.480 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:46.480 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:46.480 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:46.480 [640/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.480 [641/743] Linking static target lib/librte_vhost.a 00:02:46.480 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:46.738 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:46.738 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:46.738 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.738 [646/743] Linking static target drivers/librte_net_i40e.a 00:02:46.996 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:46.996 [648/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.996 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:46.996 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:47.256 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:47.514 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:47.514 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.514 [654/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:47.514 [655/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:47.773 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:47.774 [657/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.774 [658/743] Linking target lib/librte_vhost.so.23.0 00:02:48.032 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:48.290 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:48.290 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:48.290 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:48.290 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:48.549 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:48.549 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:48.549 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:48.549 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:48.808 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:48.808 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:49.066 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:49.325 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:49.325 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:49.325 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:49.892 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:49.892 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:50.150 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:50.408 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:50.408 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:50.666 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:50.666 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:50.666 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:50.666 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:50.924 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:50.924 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:50.925 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:51.182 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:51.182 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:51.439 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:51.439 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:51.439 [690/743] Compiling 
C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:51.696 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:51.696 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:51.696 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:51.696 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:52.261 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:52.261 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:52.261 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:52.519 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:52.778 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:52.778 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:52.778 [701/743] Linking static target lib/librte_pipeline.a 00:02:53.344 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:53.344 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:53.344 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:53.344 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:53.602 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:53.602 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:53.602 [708/743] Linking target app/dpdk-dumpcap 00:02:53.602 [709/743] Linking target app/dpdk-pdump 00:02:53.602 [710/743] Linking target app/dpdk-proc-info 00:02:53.861 [711/743] Linking target app/dpdk-test-acl 00:02:53.861 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:54.119 [713/743] Linking target app/dpdk-test-crypto-perf 00:02:54.119 [714/743] Linking target app/dpdk-test-bbdev 00:02:54.119 [715/743] Linking target app/dpdk-test-cmdline 00:02:54.119 [716/743] Linking target app/dpdk-test-compress-perf 00:02:54.119 [717/743] Linking target app/dpdk-test-eventdev 00:02:54.377 [718/743] Linking target app/dpdk-test-fib 00:02:54.377 [719/743] Linking target app/dpdk-test-flow-perf 00:02:54.377 [720/743] Linking target app/dpdk-test-gpudev 00:02:54.377 [721/743] Linking target app/dpdk-test-pipeline 00:02:55.038 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:55.038 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:55.038 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:55.038 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:55.296 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:55.296 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:55.296 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.571 [729/743] Linking target lib/librte_pipeline.so.23.0 00:02:55.571 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:55.832 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:55.832 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:56.090 [733/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:56.090 [734/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:56.090 
[735/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:56.348 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:56.348 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:56.608 [738/743] Linking target app/dpdk-test-sad 00:02:56.608 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:56.608 [740/743] Linking target app/dpdk-test-regex 00:02:56.868 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:57.127 [742/743] Linking target app/dpdk-testpmd 00:02:57.127 [743/743] Linking target app/dpdk-test-security-perf 00:02:57.127 05:50:20 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:57.127 05:50:20 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:57.127 05:50:20 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:57.127 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:57.127 [0/1] Installing files. 00:02:57.698 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:57.698 Installing 
/home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.698 
Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.701 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:57.702 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:57.703 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:57.703 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:57.703 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.703 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.964 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:57.965 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:57.965 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:57.965 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.965 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:57.965 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:57.965 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.965 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.965 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.967 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.968 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:57.969 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:57.969 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:57.969 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:57.969 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:57.969 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:57.969 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:57.969 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:57.969 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:57.969 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:57.969 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:57.969 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:57.969 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:57.969 Installing symlink 
pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:57.970 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:57.970 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:57.970 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:57.970 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:57.970 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:57.970 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:57.970 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:57.970 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:57.970 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:57.970 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:57.970 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:57.970 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:57.970 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:57.970 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:57.970 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:57.970 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:57.970 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:57.970 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:57.970 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:57.970 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:57.970 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:57.970 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:57.970 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:57.970 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:57.970 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:57.970 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:57.970 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:57.970 Installing symlink pointing to librte_cfgfile.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:57.970 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:57.970 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:57.970 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:57.970 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:57.970 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:57.970 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:57.970 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:57.970 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:57.970 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:57.970 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:57.970 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:57.970 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:57.970 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:57.970 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:57.970 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:57.970 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:57.970 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:57.970 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:57.970 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:57.970 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:57.970 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:57.970 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:57.970 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:57.970 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:57.970 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:57.970 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:57.970 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:57.970 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:57.970 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:57.970 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:57.970 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:57.970 Installing symlink pointing to librte_jobstats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:57.970 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:57.970 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:57.970 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:57.970 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:57.970 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:57.970 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:57.970 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:57.970 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:57.970 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:57.970 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:57.970 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:57.970 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:57.971 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:57.971 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:57.971 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:57.971 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:57.971 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:57.971 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:57.971 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:57.971 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:57.971 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:57.971 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:57.971 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:57.971 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:57.971 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:57.971 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:57.971 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:57.971 Installing symlink pointing to librte_vhost.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:57.971 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:57.971 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:57.971 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:57.971 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:57.971 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:57.971 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:57.971 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:57.971 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:57.971 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:57.971 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:57.971 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:57.971 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:57.971 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:57.971 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:57.971 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:57.971 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:57.971 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:57.971 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:57.971 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:57.971 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:57.971 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:57.971 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:57.971 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:57.971 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:57.971 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:58.230 05:50:21 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:58.230 ************************************ 00:02:58.230 END TEST build_native_dpdk 00:02:58.230 05:50:21 
build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:58.230 00:02:58.230 real 0m51.419s 00:02:58.230 user 6m7.636s 00:02:58.230 sys 0m54.649s 00:02:58.230 05:50:21 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:58.230 05:50:21 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:58.230 ************************************ 00:02:58.230 05:50:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:58.230 05:50:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:58.230 05:50:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:58.230 05:50:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:58.230 05:50:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:58.230 05:50:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:58.230 05:50:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:58.230 05:50:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:58.230 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:58.489 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.489 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:58.489 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:58.748 Using 'verbs' RDMA provider 00:03:12.357 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:27.352 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:27.352 Creating mk/config.mk...done. 00:03:27.352 Creating mk/cc.flags.mk...done. 00:03:27.352 Type 'make' to build. 00:03:27.352 05:50:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:27.352 05:50:48 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:27.352 05:50:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:27.352 05:50:48 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.352 ************************************ 00:03:27.352 START TEST make 00:03:27.352 ************************************ 00:03:27.352 05:50:48 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:27.352 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:27.352 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:27.352 meson setup builddir \ 00:03:27.352 -Dwith-libaio=enabled \ 00:03:27.352 -Dwith-liburing=enabled \ 00:03:27.352 -Dwith-libvfn=disabled \ 00:03:27.352 -Dwith-spdk=false && \ 00:03:27.352 meson compile -C builddir && \ 00:03:27.352 cd -) 00:03:27.352 make[1]: Nothing to be done for 'all'. 
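The configure invocation above points SPDK at the DPDK tree built earlier (--with-dpdk=/home/vagrant/spdk_repo/dpdk/build) and resolves it through the libdpdk.pc files installed a few steps back ("Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs..."). A minimal sketch, not part of the log, of verifying that pkg-config wiring by hand under the same prefix before configuring:

    # Hypothetical pre-flight check: confirm the freshly installed DPDK is
    # the one SPDK's configure will pick up via pkg-config.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
    pkg-config --modversion libdpdk        # expect a 22.11.x version for this run
    pkg-config --cflags --libs libdpdk     # the include/link flags configure consumes

If the second command fails, configure would not see the intended DPDK build, so this is a cheap way to catch a bad prefix before a long build.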
00:03:28.724 The Meson build system 00:03:28.724 Version: 1.5.0 00:03:28.724 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:28.724 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:28.724 Build type: native build 00:03:28.724 Project name: xnvme 00:03:28.724 Project version: 0.7.3 00:03:28.724 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:28.724 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:28.724 Host machine cpu family: x86_64 00:03:28.724 Host machine cpu: x86_64 00:03:28.724 Message: host_machine.system: linux 00:03:28.724 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:28.724 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:28.724 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:28.724 Run-time dependency threads found: YES 00:03:28.724 Has header "setupapi.h" : NO 00:03:28.724 Has header "linux/blkzoned.h" : YES 00:03:28.724 Has header "linux/blkzoned.h" : YES (cached) 00:03:28.724 Has header "libaio.h" : YES 00:03:28.724 Library aio found: YES 00:03:28.724 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:28.724 Run-time dependency liburing found: YES 2.2 00:03:28.724 Dependency libvfn skipped: feature with-libvfn disabled 00:03:28.724 Run-time dependency appleframeworks found: NO (tried framework) 00:03:28.724 Run-time dependency appleframeworks found: NO (tried framework) 00:03:28.724 Configuring xnvme_config.h using configuration 00:03:28.724 Configuring xnvme.spec using configuration 00:03:28.724 Run-time dependency bash-completion found: YES 2.11 00:03:28.724 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:28.724 Program cp found: YES (/usr/bin/cp) 00:03:28.724 Has header "winsock2.h" : NO 00:03:28.724 Has header "dbghelp.h" : NO 00:03:28.724 Library rpcrt4 found: NO 00:03:28.724 Library rt found: YES 00:03:28.724 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:28.724 Found CMake: /usr/bin/cmake (3.27.7) 00:03:28.724 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:28.724 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:28.724 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:28.724 Build targets in project: 32 00:03:28.724 00:03:28.724 xnvme 0.7.3 00:03:28.724 00:03:28.724 User defined options 00:03:28.724 with-libaio : enabled 00:03:28.724 with-liburing: enabled 00:03:28.724 with-libvfn : disabled 00:03:28.724 with-spdk : false 00:03:28.724 00:03:28.724 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:29.290 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:29.290 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:29.547 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:29.547 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:29.547 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:29.547 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:29.547 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:29.547 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:29.547 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:29.547 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:29.547 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 
00:03:29.547 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:29.547 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:29.547 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:29.547 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:29.547 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:29.547 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:29.547 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:29.814 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:29.814 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:29.815 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:29.815 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:29.815 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:29.815 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:29.815 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:29.815 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:29.815 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:29.815 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:29.815 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:29.815 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:29.815 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:29.815 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:29.815 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:29.815 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:29.815 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:29.815 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:29.815 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:29.815 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:29.815 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:29.815 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:29.815 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:29.815 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:29.815 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:30.072 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:30.072 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:30.072 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:30.072 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:30.072 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:30.072 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:30.072 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:30.072 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:30.072 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:30.072 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:30.072 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:30.072 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:30.072 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:30.072 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:30.072 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:30.072 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:30.072 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:30.072 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:30.072 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:30.072 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:30.072 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:30.330 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:30.330 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:30.330 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:30.330 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:30.330 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:30.330 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:30.330 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:30.330 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:30.330 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:30.330 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:30.330 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:30.330 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:30.330 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:30.330 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:30.586 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:30.586 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:30.586 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:30.586 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:30.586 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:30.586 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:30.586 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:30.586 [85/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:30.586 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:30.586 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:30.586 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:30.586 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:30.586 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:30.586 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:30.843 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:30.843 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:30.844 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:30.844 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:30.844 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:30.844 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:30.844 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:30.844 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:30.844 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:30.844 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:30.844 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:30.844 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:30.844 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:30.844 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:30.844 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:30.844 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:30.844 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:30.844 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:30.844 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:30.844 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:30.844 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:30.844 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:30.844 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:30.844 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:30.844 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:30.844 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:30.844 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:30.844 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:30.844 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:30.844 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:31.102 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:31.102 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:31.102 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:31.102 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:31.102 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:31.102 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:31.102 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:31.102 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:31.102 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:31.102 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:31.102 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:31.102 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:31.102 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:31.102 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:31.359 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:31.359 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:31.359 [138/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:31.359 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:31.359 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:31.359 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:31.359 [142/203] Compiling C object 
tests/xnvme_tests_cli.p/cli.c.o 00:03:31.359 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:31.359 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:31.359 [145/203] Linking target lib/libxnvme.so 00:03:31.359 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:31.359 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:31.359 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:31.617 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:31.617 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:31.617 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:31.617 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:31.617 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:31.617 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:31.617 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:31.617 [156/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:31.617 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:31.617 [158/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:31.617 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:31.891 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:31.891 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:31.891 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:31.891 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:31.891 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:31.891 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:31.891 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:31.891 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:31.891 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:31.891 [169/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:31.891 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:32.171 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:32.171 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:32.171 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:32.171 [174/203] Linking static target lib/libxnvme.a 00:03:32.171 [175/203] Linking target tests/xnvme_tests_lblk 00:03:32.171 [176/203] Linking target tests/xnvme_tests_buf 00:03:32.171 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:32.171 [178/203] Linking target tests/xnvme_tests_cli 00:03:32.171 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:03:32.171 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:32.171 [181/203] Linking target tests/xnvme_tests_scc 00:03:32.171 [182/203] Linking target tests/xnvme_tests_ioworker 00:03:32.171 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:32.171 [184/203] Linking target tests/xnvme_tests_enum 00:03:32.171 [185/203] Linking target tests/xnvme_tests_znd_append 00:03:32.171 [186/203] Linking target tools/xnvme 00:03:32.171 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:32.171 [188/203] Linking target tests/xnvme_tests_znd_state 00:03:32.171 [189/203] Linking target tools/xdd 00:03:32.171 [190/203] Linking target tests/xnvme_tests_map 00:03:32.171 [191/203] Linking target 
tools/xnvme_file 00:03:32.171 [192/203] Linking target tools/zoned 00:03:32.433 [193/203] Linking target tests/xnvme_tests_kvs 00:03:32.433 [194/203] Linking target tools/lblk 00:03:32.433 [195/203] Linking target examples/xnvme_dev 00:03:32.433 [196/203] Linking target tools/kvs 00:03:32.433 [197/203] Linking target examples/xnvme_io_async 00:03:32.433 [198/203] Linking target examples/xnvme_enum 00:03:32.433 [199/203] Linking target examples/xnvme_single_async 00:03:32.433 [200/203] Linking target examples/zoned_io_async 00:03:32.433 [201/203] Linking target examples/zoned_io_sync 00:03:32.433 [202/203] Linking target examples/xnvme_single_sync 00:03:32.433 [203/203] Linking target examples/xnvme_hello 00:03:32.433 INFO: autodetecting backend as ninja 00:03:32.433 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:32.433 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:19.102 CC lib/log/log.o 00:04:19.102 CC lib/log/log_flags.o 00:04:19.102 CC lib/log/log_deprecated.o 00:04:19.102 CC lib/ut/ut.o 00:04:19.102 CC lib/ut_mock/mock.o 00:04:19.102 LIB libspdk_ut_mock.a 00:04:19.102 LIB libspdk_log.a 00:04:19.102 SO libspdk_ut_mock.so.6.0 00:04:19.102 LIB libspdk_ut.a 00:04:19.102 SO libspdk_log.so.7.0 00:04:19.102 SO libspdk_ut.so.2.0 00:04:19.102 SYMLINK libspdk_ut_mock.so 00:04:19.102 SYMLINK libspdk_log.so 00:04:19.102 SYMLINK libspdk_ut.so 00:04:19.102 CXX lib/trace_parser/trace.o 00:04:19.102 CC lib/util/base64.o 00:04:19.102 CC lib/util/bit_array.o 00:04:19.102 CC lib/util/cpuset.o 00:04:19.102 CC lib/util/crc16.o 00:04:19.102 CC lib/dma/dma.o 00:04:19.102 CC lib/util/crc32.o 00:04:19.102 CC lib/util/crc32c.o 00:04:19.102 CC lib/ioat/ioat.o 00:04:19.102 CC lib/util/crc32_ieee.o 00:04:19.102 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.102 CC lib/util/crc64.o 00:04:19.102 CC lib/util/dif.o 00:04:19.102 CC lib/util/fd.o 00:04:19.102 LIB libspdk_dma.a 00:04:19.102 CC lib/util/fd_group.o 00:04:19.102 CC lib/util/file.o 00:04:19.102 SO libspdk_dma.so.5.0 00:04:19.102 CC lib/vfio_user/host/vfio_user.o 00:04:19.102 SYMLINK libspdk_dma.so 00:04:19.102 CC lib/util/hexlify.o 00:04:19.102 CC lib/util/iov.o 00:04:19.102 LIB libspdk_ioat.a 00:04:19.102 SO libspdk_ioat.so.7.0 00:04:19.102 CC lib/util/math.o 00:04:19.102 CC lib/util/net.o 00:04:19.359 SYMLINK libspdk_ioat.so 00:04:19.359 CC lib/util/pipe.o 00:04:19.359 CC lib/util/strerror_tls.o 00:04:19.359 CC lib/util/string.o 00:04:19.359 CC lib/util/uuid.o 00:04:19.359 CC lib/util/xor.o 00:04:19.359 LIB libspdk_vfio_user.a 00:04:19.359 CC lib/util/zipf.o 00:04:19.359 SO libspdk_vfio_user.so.5.0 00:04:19.359 CC lib/util/md5.o 00:04:19.359 SYMLINK libspdk_vfio_user.so 00:04:19.617 LIB libspdk_util.a 00:04:19.874 SO libspdk_util.so.10.0 00:04:19.874 LIB libspdk_trace_parser.a 00:04:19.874 SYMLINK libspdk_util.so 00:04:19.874 SO libspdk_trace_parser.so.6.0 00:04:20.133 SYMLINK libspdk_trace_parser.so 00:04:20.133 CC lib/json/json_parse.o 00:04:20.133 CC lib/vmd/vmd.o 00:04:20.133 CC lib/conf/conf.o 00:04:20.133 CC lib/json/json_util.o 00:04:20.133 CC lib/vmd/led.o 00:04:20.133 CC lib/json/json_write.o 00:04:20.133 CC lib/rdma_utils/rdma_utils.o 00:04:20.133 CC lib/rdma_provider/common.o 00:04:20.133 CC lib/idxd/idxd.o 00:04:20.133 CC lib/env_dpdk/env.o 00:04:20.391 CC lib/idxd/idxd_user.o 00:04:20.391 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:20.391 LIB libspdk_conf.a 00:04:20.391 CC lib/idxd/idxd_kernel.o 00:04:20.391 CC lib/env_dpdk/memory.o 00:04:20.391 SO libspdk_conf.so.6.0 
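The xnvme subproject finished above (203/203 ninja targets), producing both the shared libxnvme.so and the static libxnvme.a from the same sources, and the log has moved on to the main SPDK make. Assuming the same tree, the subproject can be rebuilt in isolation with the options recorded in the setup step earlier:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    # --reconfigure reuses the existing builddir with the same feature options
    meson setup builddir --reconfigure \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=false
    meson compile -C builddir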
00:04:20.391 LIB libspdk_rdma_utils.a 00:04:20.391 SO libspdk_rdma_utils.so.1.0 00:04:20.391 LIB libspdk_json.a 00:04:20.648 SYMLINK libspdk_conf.so 00:04:20.648 SO libspdk_json.so.6.0 00:04:20.648 CC lib/env_dpdk/pci.o 00:04:20.648 SYMLINK libspdk_rdma_utils.so 00:04:20.648 CC lib/env_dpdk/init.o 00:04:20.648 LIB libspdk_rdma_provider.a 00:04:20.648 SO libspdk_rdma_provider.so.6.0 00:04:20.648 CC lib/env_dpdk/threads.o 00:04:20.648 CC lib/env_dpdk/pci_ioat.o 00:04:20.648 SYMLINK libspdk_json.so 00:04:20.648 SYMLINK libspdk_rdma_provider.so 00:04:20.648 CC lib/env_dpdk/pci_virtio.o 00:04:20.905 CC lib/env_dpdk/pci_vmd.o 00:04:20.905 CC lib/env_dpdk/pci_idxd.o 00:04:20.905 CC lib/jsonrpc/jsonrpc_server.o 00:04:20.905 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:20.905 CC lib/env_dpdk/pci_event.o 00:04:20.905 CC lib/env_dpdk/sigbus_handler.o 00:04:20.905 LIB libspdk_idxd.a 00:04:20.905 CC lib/env_dpdk/pci_dpdk.o 00:04:20.905 LIB libspdk_vmd.a 00:04:20.905 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:21.162 SO libspdk_idxd.so.12.1 00:04:21.162 CC lib/jsonrpc/jsonrpc_client.o 00:04:21.162 SO libspdk_vmd.so.6.0 00:04:21.162 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:21.162 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:21.162 SYMLINK libspdk_idxd.so 00:04:21.162 SYMLINK libspdk_vmd.so 00:04:21.420 LIB libspdk_jsonrpc.a 00:04:21.421 SO libspdk_jsonrpc.so.6.0 00:04:21.421 SYMLINK libspdk_jsonrpc.so 00:04:21.677 CC lib/rpc/rpc.o 00:04:21.934 LIB libspdk_rpc.a 00:04:21.934 SO libspdk_rpc.so.6.0 00:04:22.191 SYMLINK libspdk_rpc.so 00:04:22.191 LIB libspdk_env_dpdk.a 00:04:22.191 SO libspdk_env_dpdk.so.15.0 00:04:22.191 CC lib/keyring/keyring.o 00:04:22.191 CC lib/keyring/keyring_rpc.o 00:04:22.191 CC lib/notify/notify.o 00:04:22.191 CC lib/notify/notify_rpc.o 00:04:22.191 CC lib/trace/trace.o 00:04:22.191 CC lib/trace/trace_flags.o 00:04:22.191 CC lib/trace/trace_rpc.o 00:04:22.448 SYMLINK libspdk_env_dpdk.so 00:04:22.448 LIB libspdk_notify.a 00:04:22.448 SO libspdk_notify.so.6.0 00:04:22.706 SYMLINK libspdk_notify.so 00:04:22.706 LIB libspdk_keyring.a 00:04:22.706 LIB libspdk_trace.a 00:04:22.706 SO libspdk_keyring.so.2.0 00:04:22.706 SO libspdk_trace.so.11.0 00:04:22.706 SYMLINK libspdk_keyring.so 00:04:22.706 SYMLINK libspdk_trace.so 00:04:22.964 CC lib/thread/thread.o 00:04:22.964 CC lib/thread/iobuf.o 00:04:22.964 CC lib/sock/sock_rpc.o 00:04:22.964 CC lib/sock/sock.o 00:04:23.528 LIB libspdk_sock.a 00:04:23.528 SO libspdk_sock.so.10.0 00:04:23.785 SYMLINK libspdk_sock.so 00:04:24.043 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:24.043 CC lib/nvme/nvme_ctrlr.o 00:04:24.043 CC lib/nvme/nvme_fabric.o 00:04:24.043 CC lib/nvme/nvme_ns_cmd.o 00:04:24.043 CC lib/nvme/nvme_ns.o 00:04:24.043 CC lib/nvme/nvme_pcie_common.o 00:04:24.043 CC lib/nvme/nvme_pcie.o 00:04:24.043 CC lib/nvme/nvme.o 00:04:24.043 CC lib/nvme/nvme_qpair.o 00:04:24.977 CC lib/nvme/nvme_quirks.o 00:04:24.977 CC lib/nvme/nvme_transport.o 00:04:24.977 CC lib/nvme/nvme_discovery.o 00:04:24.977 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:24.977 LIB libspdk_thread.a 00:04:24.977 SO libspdk_thread.so.10.1 00:04:24.977 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:24.977 CC lib/nvme/nvme_tcp.o 00:04:24.977 CC lib/nvme/nvme_opal.o 00:04:25.235 SYMLINK libspdk_thread.so 00:04:25.235 CC lib/nvme/nvme_io_msg.o 00:04:25.235 CC lib/nvme/nvme_poll_group.o 00:04:25.493 CC lib/nvme/nvme_zns.o 00:04:25.493 CC lib/nvme/nvme_stubs.o 00:04:25.493 CC lib/nvme/nvme_auth.o 00:04:25.752 CC lib/nvme/nvme_cuse.o 00:04:26.010 CC lib/accel/accel.o 00:04:26.010 CC lib/nvme/nvme_rdma.o 00:04:26.010 CC 
lib/blob/blobstore.o 00:04:26.010 CC lib/init/json_config.o 00:04:26.268 CC lib/init/subsystem.o 00:04:26.268 CC lib/virtio/virtio.o 00:04:26.268 CC lib/virtio/virtio_vhost_user.o 00:04:26.268 CC lib/init/subsystem_rpc.o 00:04:26.526 CC lib/init/rpc.o 00:04:26.526 CC lib/virtio/virtio_vfio_user.o 00:04:26.784 CC lib/virtio/virtio_pci.o 00:04:26.784 LIB libspdk_init.a 00:04:26.784 CC lib/blob/request.o 00:04:26.784 SO libspdk_init.so.6.0 00:04:26.784 CC lib/blob/zeroes.o 00:04:26.784 SYMLINK libspdk_init.so 00:04:26.784 CC lib/blob/blob_bs_dev.o 00:04:27.043 CC lib/fsdev/fsdev.o 00:04:27.043 CC lib/accel/accel_rpc.o 00:04:27.043 LIB libspdk_virtio.a 00:04:27.043 CC lib/fsdev/fsdev_io.o 00:04:27.043 SO libspdk_virtio.so.7.0 00:04:27.043 CC lib/event/app.o 00:04:27.043 CC lib/accel/accel_sw.o 00:04:27.043 CC lib/fsdev/fsdev_rpc.o 00:04:27.301 SYMLINK libspdk_virtio.so 00:04:27.301 CC lib/event/reactor.o 00:04:27.301 CC lib/event/log_rpc.o 00:04:27.301 CC lib/event/app_rpc.o 00:04:27.559 CC lib/event/scheduler_static.o 00:04:27.559 LIB libspdk_accel.a 00:04:27.559 SO libspdk_accel.so.16.0 00:04:27.818 SYMLINK libspdk_accel.so 00:04:27.818 LIB libspdk_event.a 00:04:27.818 LIB libspdk_fsdev.a 00:04:27.818 SO libspdk_fsdev.so.1.0 00:04:27.818 SO libspdk_event.so.14.0 00:04:27.818 LIB libspdk_nvme.a 00:04:27.818 SYMLINK libspdk_fsdev.so 00:04:27.818 SYMLINK libspdk_event.so 00:04:27.818 CC lib/bdev/bdev.o 00:04:27.818 CC lib/bdev/bdev_zone.o 00:04:27.818 CC lib/bdev/bdev_rpc.o 00:04:27.818 CC lib/bdev/part.o 00:04:27.818 CC lib/bdev/scsi_nvme.o 00:04:28.077 SO libspdk_nvme.so.14.0 00:04:28.077 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:28.336 SYMLINK libspdk_nvme.so 00:04:28.904 LIB libspdk_fuse_dispatcher.a 00:04:28.904 SO libspdk_fuse_dispatcher.so.1.0 00:04:29.163 SYMLINK libspdk_fuse_dispatcher.so 00:04:30.543 LIB libspdk_blob.a 00:04:30.543 SO libspdk_blob.so.11.0 00:04:30.543 SYMLINK libspdk_blob.so 00:04:30.802 CC lib/blobfs/blobfs.o 00:04:30.802 CC lib/blobfs/tree.o 00:04:30.802 CC lib/lvol/lvol.o 00:04:31.741 LIB libspdk_bdev.a 00:04:31.741 SO libspdk_bdev.so.16.0 00:04:31.741 SYMLINK libspdk_bdev.so 00:04:31.999 CC lib/nvmf/ctrlr.o 00:04:31.999 CC lib/scsi/dev.o 00:04:31.999 CC lib/nvmf/ctrlr_discovery.o 00:04:31.999 CC lib/scsi/lun.o 00:04:31.999 CC lib/nbd/nbd.o 00:04:31.999 CC lib/nvmf/ctrlr_bdev.o 00:04:31.999 CC lib/ublk/ublk.o 00:04:31.999 CC lib/ftl/ftl_core.o 00:04:31.999 LIB libspdk_blobfs.a 00:04:32.256 SO libspdk_blobfs.so.10.0 00:04:32.256 LIB libspdk_lvol.a 00:04:32.256 SYMLINK libspdk_blobfs.so 00:04:32.256 CC lib/ftl/ftl_init.o 00:04:32.256 SO libspdk_lvol.so.10.0 00:04:32.256 SYMLINK libspdk_lvol.so 00:04:32.256 CC lib/nbd/nbd_rpc.o 00:04:32.256 CC lib/ublk/ublk_rpc.o 00:04:32.514 CC lib/scsi/port.o 00:04:32.514 CC lib/ftl/ftl_layout.o 00:04:32.514 CC lib/ftl/ftl_debug.o 00:04:32.514 CC lib/scsi/scsi.o 00:04:32.514 LIB libspdk_nbd.a 00:04:32.514 CC lib/scsi/scsi_bdev.o 00:04:32.514 SO libspdk_nbd.so.7.0 00:04:32.771 CC lib/scsi/scsi_pr.o 00:04:32.771 SYMLINK libspdk_nbd.so 00:04:32.771 CC lib/scsi/scsi_rpc.o 00:04:32.771 CC lib/nvmf/subsystem.o 00:04:32.771 CC lib/scsi/task.o 00:04:32.771 CC lib/ftl/ftl_io.o 00:04:32.771 CC lib/nvmf/nvmf.o 00:04:32.771 CC lib/ftl/ftl_sb.o 00:04:33.029 LIB libspdk_ublk.a 00:04:33.029 SO libspdk_ublk.so.3.0 00:04:33.029 CC lib/nvmf/nvmf_rpc.o 00:04:33.029 CC lib/nvmf/transport.o 00:04:33.029 SYMLINK libspdk_ublk.so 00:04:33.029 CC lib/ftl/ftl_l2p.o 00:04:33.029 CC lib/nvmf/tcp.o 00:04:33.029 CC lib/ftl/ftl_l2p_flat.o 00:04:33.029 CC 
lib/nvmf/stubs.o 00:04:33.291 LIB libspdk_scsi.a 00:04:33.291 SO libspdk_scsi.so.9.0 00:04:33.291 CC lib/ftl/ftl_nv_cache.o 00:04:33.291 CC lib/ftl/ftl_band.o 00:04:33.291 SYMLINK libspdk_scsi.so 00:04:33.291 CC lib/ftl/ftl_band_ops.o 00:04:33.598 CC lib/nvmf/mdns_server.o 00:04:33.889 CC lib/nvmf/rdma.o 00:04:33.889 CC lib/nvmf/auth.o 00:04:34.147 CC lib/iscsi/conn.o 00:04:34.147 CC lib/vhost/vhost.o 00:04:34.147 CC lib/iscsi/init_grp.o 00:04:34.147 CC lib/iscsi/iscsi.o 00:04:34.405 CC lib/ftl/ftl_writer.o 00:04:34.405 CC lib/iscsi/param.o 00:04:34.405 CC lib/vhost/vhost_rpc.o 00:04:34.663 CC lib/vhost/vhost_scsi.o 00:04:34.663 CC lib/ftl/ftl_rq.o 00:04:34.923 CC lib/ftl/ftl_reloc.o 00:04:34.923 CC lib/iscsi/portal_grp.o 00:04:34.923 CC lib/iscsi/tgt_node.o 00:04:34.923 CC lib/ftl/ftl_l2p_cache.o 00:04:35.182 CC lib/iscsi/iscsi_subsystem.o 00:04:35.182 CC lib/ftl/ftl_p2l.o 00:04:35.182 CC lib/vhost/vhost_blk.o 00:04:35.182 CC lib/ftl/ftl_p2l_log.o 00:04:35.182 CC lib/ftl/mngt/ftl_mngt.o 00:04:35.441 CC lib/vhost/rte_vhost_user.o 00:04:35.699 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:35.699 CC lib/iscsi/iscsi_rpc.o 00:04:35.699 CC lib/iscsi/task.o 00:04:35.699 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:35.699 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:35.699 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:35.957 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:35.957 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:35.957 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:35.957 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:35.957 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:35.957 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:36.215 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:36.215 LIB libspdk_iscsi.a 00:04:36.215 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:36.215 SO libspdk_iscsi.so.8.0 00:04:36.215 CC lib/ftl/utils/ftl_conf.o 00:04:36.215 CC lib/ftl/utils/ftl_md.o 00:04:36.215 CC lib/ftl/utils/ftl_mempool.o 00:04:36.215 CC lib/ftl/utils/ftl_bitmap.o 00:04:36.215 CC lib/ftl/utils/ftl_property.o 00:04:36.473 SYMLINK libspdk_iscsi.so 00:04:36.473 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:36.473 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:36.473 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:36.473 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:36.473 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:36.732 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:36.732 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:36.732 LIB libspdk_vhost.a 00:04:36.732 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:36.732 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:36.732 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:36.732 SO libspdk_vhost.so.8.0 00:04:36.732 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:36.732 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:36.732 SYMLINK libspdk_vhost.so 00:04:36.732 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:36.732 LIB libspdk_nvmf.a 00:04:36.732 CC lib/ftl/base/ftl_base_dev.o 00:04:36.732 CC lib/ftl/base/ftl_base_bdev.o 00:04:36.732 CC lib/ftl/ftl_trace.o 00:04:36.990 SO libspdk_nvmf.so.19.0 00:04:37.248 LIB libspdk_ftl.a 00:04:37.248 SYMLINK libspdk_nvmf.so 00:04:37.506 SO libspdk_ftl.so.9.0 00:04:37.764 SYMLINK libspdk_ftl.so 00:04:38.023 CC module/env_dpdk/env_dpdk_rpc.o 00:04:38.023 CC module/accel/error/accel_error.o 00:04:38.023 CC module/blob/bdev/blob_bdev.o 00:04:38.023 CC module/accel/iaa/accel_iaa.o 00:04:38.023 CC module/sock/posix/posix.o 00:04:38.023 CC module/accel/ioat/accel_ioat.o 00:04:38.023 CC module/keyring/file/keyring.o 00:04:38.023 CC module/accel/dsa/accel_dsa.o 00:04:38.023 CC module/fsdev/aio/fsdev_aio.o 00:04:38.023 CC module/scheduler/dynamic/scheduler_dynamic.o 
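A pattern running through these object lists: nearly every component compiles a core object alongside a matching *_rpc.o companion (accel.o / accel_rpc.o, keyring.o / keyring_rpc.o, and so on), the latter registering that component's JSON-RPC methods. As a hedged illustration, assuming an SPDK application built from these objects is running with the default RPC socket, the registered methods can be queried with the bundled client:

    # Hypothetical session, not part of this log
    ./scripts/rpc.py rpc_get_methods     # list every method the *_rpc.o objects registered
    ./scripts/rpc.py bdev_get_bdevs      # e.g. a method contributed by the bdev layer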
00:04:38.303 LIB libspdk_env_dpdk_rpc.a 00:04:38.303 SO libspdk_env_dpdk_rpc.so.6.0 00:04:38.303 SYMLINK libspdk_env_dpdk_rpc.so 00:04:38.303 CC module/keyring/file/keyring_rpc.o 00:04:38.303 CC module/accel/error/accel_error_rpc.o 00:04:38.303 CC module/accel/ioat/accel_ioat_rpc.o 00:04:38.303 CC module/accel/iaa/accel_iaa_rpc.o 00:04:38.303 LIB libspdk_scheduler_dynamic.a 00:04:38.303 SO libspdk_scheduler_dynamic.so.4.0 00:04:38.561 LIB libspdk_blob_bdev.a 00:04:38.561 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:38.561 LIB libspdk_keyring_file.a 00:04:38.561 CC module/accel/dsa/accel_dsa_rpc.o 00:04:38.561 SYMLINK libspdk_scheduler_dynamic.so 00:04:38.561 SO libspdk_blob_bdev.so.11.0 00:04:38.561 LIB libspdk_accel_error.a 00:04:38.561 SO libspdk_keyring_file.so.2.0 00:04:38.561 LIB libspdk_accel_ioat.a 00:04:38.561 LIB libspdk_accel_iaa.a 00:04:38.561 SO libspdk_accel_ioat.so.6.0 00:04:38.561 SO libspdk_accel_error.so.2.0 00:04:38.561 SO libspdk_accel_iaa.so.3.0 00:04:38.561 SYMLINK libspdk_blob_bdev.so 00:04:38.561 SYMLINK libspdk_keyring_file.so 00:04:38.561 SYMLINK libspdk_accel_error.so 00:04:38.561 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:38.561 SYMLINK libspdk_accel_ioat.so 00:04:38.561 CC module/fsdev/aio/linux_aio_mgr.o 00:04:38.561 LIB libspdk_accel_dsa.a 00:04:38.561 SYMLINK libspdk_accel_iaa.so 00:04:38.561 LIB libspdk_scheduler_dpdk_governor.a 00:04:38.819 SO libspdk_accel_dsa.so.5.0 00:04:38.819 CC module/scheduler/gscheduler/gscheduler.o 00:04:38.819 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:38.819 SYMLINK libspdk_accel_dsa.so 00:04:38.819 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:38.819 CC module/keyring/linux/keyring.o 00:04:38.819 CC module/keyring/linux/keyring_rpc.o 00:04:38.819 LIB libspdk_scheduler_gscheduler.a 00:04:38.819 CC module/bdev/delay/vbdev_delay.o 00:04:38.819 SO libspdk_scheduler_gscheduler.so.4.0 00:04:38.819 CC module/blobfs/bdev/blobfs_bdev.o 00:04:39.077 CC module/bdev/error/vbdev_error.o 00:04:39.077 CC module/bdev/gpt/gpt.o 00:04:39.077 CC module/bdev/lvol/vbdev_lvol.o 00:04:39.077 CC module/bdev/gpt/vbdev_gpt.o 00:04:39.077 SYMLINK libspdk_scheduler_gscheduler.so 00:04:39.077 CC module/bdev/error/vbdev_error_rpc.o 00:04:39.077 LIB libspdk_fsdev_aio.a 00:04:39.077 LIB libspdk_keyring_linux.a 00:04:39.077 SO libspdk_fsdev_aio.so.1.0 00:04:39.077 SO libspdk_keyring_linux.so.1.0 00:04:39.077 LIB libspdk_sock_posix.a 00:04:39.077 SYMLINK libspdk_keyring_linux.so 00:04:39.077 SO libspdk_sock_posix.so.6.0 00:04:39.077 SYMLINK libspdk_fsdev_aio.so 00:04:39.077 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:39.077 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:39.334 SYMLINK libspdk_sock_posix.so 00:04:39.334 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:39.334 LIB libspdk_bdev_error.a 00:04:39.334 CC module/bdev/malloc/bdev_malloc.o 00:04:39.334 SO libspdk_bdev_error.so.6.0 00:04:39.334 LIB libspdk_bdev_gpt.a 00:04:39.334 LIB libspdk_blobfs_bdev.a 00:04:39.334 CC module/bdev/nvme/bdev_nvme.o 00:04:39.334 SO libspdk_bdev_gpt.so.6.0 00:04:39.334 CC module/bdev/null/bdev_null.o 00:04:39.334 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:39.334 SO libspdk_blobfs_bdev.so.6.0 00:04:39.334 LIB libspdk_bdev_delay.a 00:04:39.334 SYMLINK libspdk_bdev_error.so 00:04:39.334 SO libspdk_bdev_delay.so.6.0 00:04:39.334 SYMLINK libspdk_bdev_gpt.so 00:04:39.334 SYMLINK libspdk_blobfs_bdev.so 00:04:39.334 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:39.591 SYMLINK libspdk_bdev_delay.so 00:04:39.591 CC module/bdev/passthru/vbdev_passthru.o 00:04:39.592 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:04:39.592 CC module/bdev/raid/bdev_raid.o 00:04:39.592 CC module/bdev/split/vbdev_split.o 00:04:39.592 CC module/bdev/null/bdev_null_rpc.o 00:04:39.592 LIB libspdk_bdev_lvol.a 00:04:39.592 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:39.849 SO libspdk_bdev_lvol.so.6.0 00:04:39.849 LIB libspdk_bdev_malloc.a 00:04:39.849 CC module/bdev/nvme/nvme_rpc.o 00:04:39.849 SYMLINK libspdk_bdev_lvol.so 00:04:39.849 CC module/bdev/nvme/bdev_mdns_client.o 00:04:39.849 SO libspdk_bdev_malloc.so.6.0 00:04:39.849 LIB libspdk_bdev_null.a 00:04:39.849 SYMLINK libspdk_bdev_malloc.so 00:04:39.849 CC module/bdev/nvme/vbdev_opal.o 00:04:39.849 SO libspdk_bdev_null.so.6.0 00:04:39.849 CC module/bdev/split/vbdev_split_rpc.o 00:04:39.849 LIB libspdk_bdev_passthru.a 00:04:40.107 SO libspdk_bdev_passthru.so.6.0 00:04:40.107 SYMLINK libspdk_bdev_null.so 00:04:40.107 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:40.107 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:40.107 SYMLINK libspdk_bdev_passthru.so 00:04:40.107 LIB libspdk_bdev_split.a 00:04:40.107 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.107 SO libspdk_bdev_split.so.6.0 00:04:40.107 CC module/bdev/xnvme/bdev_xnvme.o 00:04:40.107 SYMLINK libspdk_bdev_split.so 00:04:40.107 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.366 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.366 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:40.366 CC module/bdev/raid/raid0.o 00:04:40.366 CC module/bdev/aio/bdev_aio.o 00:04:40.366 LIB libspdk_bdev_zone_block.a 00:04:40.366 SO libspdk_bdev_zone_block.so.6.0 00:04:40.366 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.624 SYMLINK libspdk_bdev_zone_block.so 00:04:40.624 CC module/bdev/ftl/bdev_ftl.o 00:04:40.624 LIB libspdk_bdev_xnvme.a 00:04:40.624 SO libspdk_bdev_xnvme.so.3.0 00:04:40.624 CC module/bdev/raid/raid1.o 00:04:40.624 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:40.624 CC module/bdev/raid/concat.o 00:04:40.624 SYMLINK libspdk_bdev_xnvme.so 00:04:40.624 CC module/bdev/iscsi/bdev_iscsi.o 00:04:40.624 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:40.624 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:40.624 LIB libspdk_bdev_aio.a 00:04:40.624 SO libspdk_bdev_aio.so.6.0 00:04:40.882 SYMLINK libspdk_bdev_aio.so 00:04:40.882 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:40.882 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:40.882 LIB libspdk_bdev_ftl.a 00:04:40.882 SO libspdk_bdev_ftl.so.6.0 00:04:40.882 LIB libspdk_bdev_raid.a 00:04:40.882 SYMLINK libspdk_bdev_ftl.so 00:04:41.139 SO libspdk_bdev_raid.so.6.0 00:04:41.140 SYMLINK libspdk_bdev_raid.so 00:04:41.140 LIB libspdk_bdev_iscsi.a 00:04:41.140 SO libspdk_bdev_iscsi.so.6.0 00:04:41.140 SYMLINK libspdk_bdev_iscsi.so 00:04:41.398 LIB libspdk_bdev_virtio.a 00:04:41.398 SO libspdk_bdev_virtio.so.6.0 00:04:41.398 SYMLINK libspdk_bdev_virtio.so 00:04:42.333 LIB libspdk_bdev_nvme.a 00:04:42.333 SO libspdk_bdev_nvme.so.7.0 00:04:42.592 SYMLINK libspdk_bdev_nvme.so 00:04:42.848 CC module/event/subsystems/fsdev/fsdev.o 00:04:42.848 CC module/event/subsystems/vmd/vmd.o 00:04:42.848 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:42.848 CC module/event/subsystems/sock/sock.o 00:04:43.104 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:43.104 CC module/event/subsystems/keyring/keyring.o 00:04:43.104 CC module/event/subsystems/scheduler/scheduler.o 00:04:43.104 CC module/event/subsystems/iobuf/iobuf.o 00:04:43.104 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:43.104 LIB libspdk_event_fsdev.a 00:04:43.104 LIB libspdk_event_vmd.a 
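The LIB / SO / SYMLINK triples in the build output above correspond to the usual static-archive, versioned-shared-object, unversioned-symlink sequence. A minimal sketch of what those three steps amount to, using libspdk_bdev_virtio from this log as the example; the object list is hypothetical and SPDK's real rules live in its mk/ makefiles:

    # LIB: archive the objects into the static library.
    ar rcs libspdk_bdev_virtio.a bdev_virtio_scsi.o bdev_virtio_blk.o bdev_virtio_rpc.o
    # SO: link the same objects into the versioned shared object
    # reported as "SO libspdk_bdev_virtio.so.6.0".
    cc -shared -Wl,-soname,libspdk_bdev_virtio.so.6 \
        -o libspdk_bdev_virtio.so.6.0 bdev_virtio_scsi.o bdev_virtio_blk.o bdev_virtio_rpc.o
    # SYMLINK: point the unversioned development name at the versioned file.
    ln -sf libspdk_bdev_virtio.so.6.0 libspdk_bdev_virtio.so
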
00:04:43.104 LIB libspdk_event_scheduler.a 00:04:43.104 LIB libspdk_event_sock.a 00:04:43.104 SO libspdk_event_fsdev.so.1.0 00:04:43.104 SO libspdk_event_vmd.so.6.0 00:04:43.104 SO libspdk_event_scheduler.so.4.0 00:04:43.104 SO libspdk_event_sock.so.5.0 00:04:43.104 LIB libspdk_event_vhost_blk.a 00:04:43.104 LIB libspdk_event_keyring.a 00:04:43.104 LIB libspdk_event_iobuf.a 00:04:43.104 SO libspdk_event_keyring.so.1.0 00:04:43.104 SO libspdk_event_vhost_blk.so.3.0 00:04:43.360 SYMLINK libspdk_event_fsdev.so 00:04:43.360 SYMLINK libspdk_event_scheduler.so 00:04:43.360 SO libspdk_event_iobuf.so.3.0 00:04:43.360 SYMLINK libspdk_event_sock.so 00:04:43.360 SYMLINK libspdk_event_vmd.so 00:04:43.360 SYMLINK libspdk_event_keyring.so 00:04:43.360 SYMLINK libspdk_event_vhost_blk.so 00:04:43.360 SYMLINK libspdk_event_iobuf.so 00:04:43.618 CC module/event/subsystems/accel/accel.o 00:04:43.876 LIB libspdk_event_accel.a 00:04:43.876 SO libspdk_event_accel.so.6.0 00:04:43.876 SYMLINK libspdk_event_accel.so 00:04:44.135 CC module/event/subsystems/bdev/bdev.o 00:04:44.393 LIB libspdk_event_bdev.a 00:04:44.393 SO libspdk_event_bdev.so.6.0 00:04:44.393 SYMLINK libspdk_event_bdev.so 00:04:44.652 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:44.652 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:44.652 CC module/event/subsystems/scsi/scsi.o 00:04:44.652 CC module/event/subsystems/nbd/nbd.o 00:04:44.652 CC module/event/subsystems/ublk/ublk.o 00:04:44.912 LIB libspdk_event_ublk.a 00:04:44.912 LIB libspdk_event_nbd.a 00:04:44.912 LIB libspdk_event_scsi.a 00:04:44.912 SO libspdk_event_ublk.so.3.0 00:04:44.912 SO libspdk_event_nbd.so.6.0 00:04:44.912 SO libspdk_event_scsi.so.6.0 00:04:44.912 SYMLINK libspdk_event_ublk.so 00:04:44.912 SYMLINK libspdk_event_scsi.so 00:04:44.912 SYMLINK libspdk_event_nbd.so 00:04:44.912 LIB libspdk_event_nvmf.a 00:04:44.912 SO libspdk_event_nvmf.so.6.0 00:04:45.171 SYMLINK libspdk_event_nvmf.so 00:04:45.171 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:45.171 CC module/event/subsystems/iscsi/iscsi.o 00:04:45.430 LIB libspdk_event_vhost_scsi.a 00:04:45.430 LIB libspdk_event_iscsi.a 00:04:45.430 SO libspdk_event_vhost_scsi.so.3.0 00:04:45.430 SO libspdk_event_iscsi.so.6.0 00:04:45.430 SYMLINK libspdk_event_vhost_scsi.so 00:04:45.430 SYMLINK libspdk_event_iscsi.so 00:04:45.689 SO libspdk.so.6.0 00:04:45.689 SYMLINK libspdk.so 00:04:45.948 CXX app/trace/trace.o 00:04:45.948 CC test/rpc_client/rpc_client_test.o 00:04:45.948 TEST_HEADER include/spdk/accel.h 00:04:45.948 TEST_HEADER include/spdk/accel_module.h 00:04:45.948 TEST_HEADER include/spdk/assert.h 00:04:45.948 TEST_HEADER include/spdk/barrier.h 00:04:45.948 TEST_HEADER include/spdk/base64.h 00:04:45.948 TEST_HEADER include/spdk/bdev.h 00:04:45.948 TEST_HEADER include/spdk/bdev_module.h 00:04:45.948 TEST_HEADER include/spdk/bdev_zone.h 00:04:45.948 TEST_HEADER include/spdk/bit_array.h 00:04:45.948 TEST_HEADER include/spdk/bit_pool.h 00:04:45.948 TEST_HEADER include/spdk/blob_bdev.h 00:04:45.948 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:45.948 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:45.948 TEST_HEADER include/spdk/blobfs.h 00:04:45.948 TEST_HEADER include/spdk/blob.h 00:04:45.948 TEST_HEADER include/spdk/conf.h 00:04:45.948 TEST_HEADER include/spdk/config.h 00:04:45.948 TEST_HEADER include/spdk/cpuset.h 00:04:45.948 TEST_HEADER include/spdk/crc16.h 00:04:45.948 TEST_HEADER include/spdk/crc32.h 00:04:45.948 TEST_HEADER include/spdk/crc64.h 00:04:45.948 TEST_HEADER include/spdk/dif.h 00:04:45.948 TEST_HEADER 
include/spdk/dma.h 00:04:45.948 TEST_HEADER include/spdk/endian.h 00:04:45.948 TEST_HEADER include/spdk/env_dpdk.h 00:04:45.948 TEST_HEADER include/spdk/env.h 00:04:45.948 TEST_HEADER include/spdk/event.h 00:04:45.948 TEST_HEADER include/spdk/fd_group.h 00:04:45.948 TEST_HEADER include/spdk/fd.h 00:04:45.948 TEST_HEADER include/spdk/file.h 00:04:45.948 TEST_HEADER include/spdk/fsdev.h 00:04:45.948 TEST_HEADER include/spdk/fsdev_module.h 00:04:45.948 TEST_HEADER include/spdk/ftl.h 00:04:45.948 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:45.948 TEST_HEADER include/spdk/gpt_spec.h 00:04:45.948 TEST_HEADER include/spdk/hexlify.h 00:04:45.948 TEST_HEADER include/spdk/histogram_data.h 00:04:45.948 TEST_HEADER include/spdk/idxd.h 00:04:45.948 TEST_HEADER include/spdk/idxd_spec.h 00:04:45.948 TEST_HEADER include/spdk/init.h 00:04:45.948 TEST_HEADER include/spdk/ioat.h 00:04:45.948 CC test/thread/poller_perf/poller_perf.o 00:04:45.948 TEST_HEADER include/spdk/ioat_spec.h 00:04:45.948 CC examples/ioat/perf/perf.o 00:04:45.948 TEST_HEADER include/spdk/iscsi_spec.h 00:04:45.948 TEST_HEADER include/spdk/json.h 00:04:45.948 TEST_HEADER include/spdk/jsonrpc.h 00:04:45.948 TEST_HEADER include/spdk/keyring.h 00:04:45.948 TEST_HEADER include/spdk/keyring_module.h 00:04:45.948 TEST_HEADER include/spdk/likely.h 00:04:45.948 CC examples/util/zipf/zipf.o 00:04:45.948 TEST_HEADER include/spdk/log.h 00:04:45.948 TEST_HEADER include/spdk/lvol.h 00:04:45.948 TEST_HEADER include/spdk/md5.h 00:04:45.948 TEST_HEADER include/spdk/memory.h 00:04:45.948 TEST_HEADER include/spdk/mmio.h 00:04:45.948 TEST_HEADER include/spdk/nbd.h 00:04:45.948 TEST_HEADER include/spdk/net.h 00:04:45.948 TEST_HEADER include/spdk/notify.h 00:04:45.948 TEST_HEADER include/spdk/nvme.h 00:04:45.948 CC test/dma/test_dma/test_dma.o 00:04:45.948 TEST_HEADER include/spdk/nvme_intel.h 00:04:45.948 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:45.948 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:45.948 TEST_HEADER include/spdk/nvme_spec.h 00:04:45.948 TEST_HEADER include/spdk/nvme_zns.h 00:04:45.948 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:45.948 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:45.948 TEST_HEADER include/spdk/nvmf.h 00:04:46.207 CC test/app/bdev_svc/bdev_svc.o 00:04:46.207 TEST_HEADER include/spdk/nvmf_spec.h 00:04:46.207 TEST_HEADER include/spdk/nvmf_transport.h 00:04:46.207 TEST_HEADER include/spdk/opal.h 00:04:46.207 TEST_HEADER include/spdk/opal_spec.h 00:04:46.207 TEST_HEADER include/spdk/pci_ids.h 00:04:46.207 TEST_HEADER include/spdk/pipe.h 00:04:46.207 TEST_HEADER include/spdk/queue.h 00:04:46.207 TEST_HEADER include/spdk/reduce.h 00:04:46.207 TEST_HEADER include/spdk/rpc.h 00:04:46.207 TEST_HEADER include/spdk/scheduler.h 00:04:46.207 TEST_HEADER include/spdk/scsi.h 00:04:46.207 CC test/env/mem_callbacks/mem_callbacks.o 00:04:46.207 TEST_HEADER include/spdk/scsi_spec.h 00:04:46.207 TEST_HEADER include/spdk/sock.h 00:04:46.207 TEST_HEADER include/spdk/stdinc.h 00:04:46.207 TEST_HEADER include/spdk/string.h 00:04:46.207 TEST_HEADER include/spdk/thread.h 00:04:46.207 TEST_HEADER include/spdk/trace.h 00:04:46.207 TEST_HEADER include/spdk/trace_parser.h 00:04:46.207 TEST_HEADER include/spdk/tree.h 00:04:46.207 TEST_HEADER include/spdk/ublk.h 00:04:46.207 TEST_HEADER include/spdk/util.h 00:04:46.207 TEST_HEADER include/spdk/uuid.h 00:04:46.207 TEST_HEADER include/spdk/version.h 00:04:46.207 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:46.207 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:46.207 TEST_HEADER 
include/spdk/vhost.h 00:04:46.207 TEST_HEADER include/spdk/vmd.h 00:04:46.207 TEST_HEADER include/spdk/xor.h 00:04:46.207 TEST_HEADER include/spdk/zipf.h 00:04:46.207 CXX test/cpp_headers/accel.o 00:04:46.207 LINK interrupt_tgt 00:04:46.207 LINK rpc_client_test 00:04:46.207 LINK poller_perf 00:04:46.207 LINK zipf 00:04:46.207 LINK bdev_svc 00:04:46.207 LINK ioat_perf 00:04:46.470 CXX test/cpp_headers/accel_module.o 00:04:46.470 LINK mem_callbacks 00:04:46.470 CC app/trace_record/trace_record.o 00:04:46.470 LINK spdk_trace 00:04:46.470 CC app/nvmf_tgt/nvmf_main.o 00:04:46.470 CXX test/cpp_headers/assert.o 00:04:46.470 CC app/iscsi_tgt/iscsi_tgt.o 00:04:46.728 CC test/env/vtophys/vtophys.o 00:04:46.728 CC examples/ioat/verify/verify.o 00:04:46.728 CC app/spdk_tgt/spdk_tgt.o 00:04:46.728 LINK test_dma 00:04:46.728 LINK nvmf_tgt 00:04:46.728 CXX test/cpp_headers/barrier.o 00:04:46.728 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:46.729 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:46.729 LINK vtophys 00:04:46.729 LINK spdk_trace_record 00:04:46.729 LINK iscsi_tgt 00:04:46.987 LINK spdk_tgt 00:04:46.987 LINK verify 00:04:46.987 CXX test/cpp_headers/base64.o 00:04:46.987 CXX test/cpp_headers/bdev.o 00:04:46.987 CXX test/cpp_headers/bdev_module.o 00:04:46.987 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:47.246 CC examples/thread/thread/thread_ex.o 00:04:47.246 CC app/spdk_lspci/spdk_lspci.o 00:04:47.246 CXX test/cpp_headers/bdev_zone.o 00:04:47.246 CC test/event/reactor/reactor.o 00:04:47.246 LINK env_dpdk_post_init 00:04:47.246 CC test/event/event_perf/event_perf.o 00:04:47.246 CC test/nvme/aer/aer.o 00:04:47.246 CC test/event/reactor_perf/reactor_perf.o 00:04:47.246 LINK nvme_fuzz 00:04:47.246 LINK spdk_lspci 00:04:47.504 CXX test/cpp_headers/bit_array.o 00:04:47.504 LINK event_perf 00:04:47.504 LINK reactor 00:04:47.504 LINK thread 00:04:47.504 LINK reactor_perf 00:04:47.504 CC test/env/memory/memory_ut.o 00:04:47.504 CC test/env/pci/pci_ut.o 00:04:47.504 CXX test/cpp_headers/bit_pool.o 00:04:47.504 LINK aer 00:04:47.763 CC app/spdk_nvme_perf/perf.o 00:04:47.763 CC test/event/app_repeat/app_repeat.o 00:04:47.763 CXX test/cpp_headers/blob_bdev.o 00:04:47.763 CC test/blobfs/mkfs/mkfs.o 00:04:47.763 CC test/accel/dif/dif.o 00:04:47.763 CC examples/sock/hello_world/hello_sock.o 00:04:48.023 CC test/nvme/reset/reset.o 00:04:48.023 LINK app_repeat 00:04:48.023 CXX test/cpp_headers/blobfs_bdev.o 00:04:48.023 LINK mkfs 00:04:48.023 LINK pci_ut 00:04:48.282 LINK hello_sock 00:04:48.282 CC test/event/scheduler/scheduler.o 00:04:48.282 LINK reset 00:04:48.282 CXX test/cpp_headers/blobfs.o 00:04:48.282 CXX test/cpp_headers/blob.o 00:04:48.282 CXX test/cpp_headers/conf.o 00:04:48.541 LINK memory_ut 00:04:48.541 CC test/nvme/sgl/sgl.o 00:04:48.541 CC examples/vmd/lsvmd/lsvmd.o 00:04:48.541 LINK scheduler 00:04:48.541 CC app/spdk_nvme_identify/identify.o 00:04:48.541 CXX test/cpp_headers/config.o 00:04:48.541 CC app/spdk_nvme_discover/discovery_aer.o 00:04:48.541 CXX test/cpp_headers/cpuset.o 00:04:48.541 LINK lsvmd 00:04:48.799 CXX test/cpp_headers/crc16.o 00:04:48.799 LINK dif 00:04:48.799 LINK spdk_nvme_perf 00:04:48.799 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:48.799 LINK spdk_nvme_discover 00:04:48.799 LINK sgl 00:04:48.799 CC app/spdk_top/spdk_top.o 00:04:48.799 CXX test/cpp_headers/crc32.o 00:04:49.058 CC examples/vmd/led/led.o 00:04:49.059 CXX test/cpp_headers/crc64.o 00:04:49.059 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:49.059 CXX test/cpp_headers/dif.o 00:04:49.059 
LINK iscsi_fuzz 00:04:49.059 CC test/nvme/e2edp/nvme_dp.o 00:04:49.059 LINK led 00:04:49.059 CXX test/cpp_headers/dma.o 00:04:49.317 CC examples/idxd/perf/perf.o 00:04:49.317 CC test/lvol/esnap/esnap.o 00:04:49.317 CXX test/cpp_headers/endian.o 00:04:49.317 CXX test/cpp_headers/env_dpdk.o 00:04:49.317 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:49.318 LINK nvme_dp 00:04:49.576 CXX test/cpp_headers/env.o 00:04:49.576 LINK vhost_fuzz 00:04:49.576 CC examples/accel/perf/accel_perf.o 00:04:49.576 LINK spdk_nvme_identify 00:04:49.576 CC examples/blob/hello_world/hello_blob.o 00:04:49.576 LINK idxd_perf 00:04:49.576 LINK hello_fsdev 00:04:49.576 CXX test/cpp_headers/event.o 00:04:49.836 CC test/nvme/overhead/overhead.o 00:04:49.836 CC test/app/histogram_perf/histogram_perf.o 00:04:49.836 CXX test/cpp_headers/fd_group.o 00:04:49.836 CC examples/blob/cli/blobcli.o 00:04:49.836 LINK hello_blob 00:04:50.094 CC test/app/jsoncat/jsoncat.o 00:04:50.095 LINK histogram_perf 00:04:50.095 LINK spdk_top 00:04:50.095 CC examples/nvme/hello_world/hello_world.o 00:04:50.095 CXX test/cpp_headers/fd.o 00:04:50.095 LINK overhead 00:04:50.095 LINK jsoncat 00:04:50.095 CXX test/cpp_headers/file.o 00:04:50.095 CXX test/cpp_headers/fsdev.o 00:04:50.095 LINK accel_perf 00:04:50.353 LINK hello_world 00:04:50.353 CC app/vhost/vhost.o 00:04:50.353 CC test/app/stub/stub.o 00:04:50.353 CXX test/cpp_headers/fsdev_module.o 00:04:50.353 CXX test/cpp_headers/ftl.o 00:04:50.353 CC test/nvme/err_injection/err_injection.o 00:04:50.353 CXX test/cpp_headers/fuse_dispatcher.o 00:04:50.353 CC examples/nvme/reconnect/reconnect.o 00:04:50.353 CXX test/cpp_headers/gpt_spec.o 00:04:50.612 LINK blobcli 00:04:50.612 LINK vhost 00:04:50.612 LINK stub 00:04:50.612 LINK err_injection 00:04:50.612 CXX test/cpp_headers/hexlify.o 00:04:50.612 CXX test/cpp_headers/histogram_data.o 00:04:50.612 CXX test/cpp_headers/idxd.o 00:04:50.871 CC examples/bdev/hello_world/hello_bdev.o 00:04:50.871 CXX test/cpp_headers/idxd_spec.o 00:04:50.871 CC examples/bdev/bdevperf/bdevperf.o 00:04:50.871 LINK reconnect 00:04:50.871 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:50.871 CC app/spdk_dd/spdk_dd.o 00:04:50.871 CC test/nvme/startup/startup.o 00:04:50.871 CXX test/cpp_headers/init.o 00:04:50.871 CXX test/cpp_headers/ioat.o 00:04:50.871 CC test/bdev/bdevio/bdevio.o 00:04:51.130 CXX test/cpp_headers/ioat_spec.o 00:04:51.130 LINK hello_bdev 00:04:51.130 LINK startup 00:04:51.130 CXX test/cpp_headers/iscsi_spec.o 00:04:51.130 CC test/nvme/reserve/reserve.o 00:04:51.389 CXX test/cpp_headers/json.o 00:04:51.389 CC test/nvme/simple_copy/simple_copy.o 00:04:51.389 LINK spdk_dd 00:04:51.389 CC test/nvme/connect_stress/connect_stress.o 00:04:51.389 CC test/nvme/boot_partition/boot_partition.o 00:04:51.389 CXX test/cpp_headers/jsonrpc.o 00:04:51.389 LINK reserve 00:04:51.389 LINK bdevio 00:04:51.389 LINK nvme_manage 00:04:51.647 LINK boot_partition 00:04:51.647 LINK connect_stress 00:04:51.647 LINK simple_copy 00:04:51.647 CXX test/cpp_headers/keyring.o 00:04:51.647 CC app/fio/nvme/fio_plugin.o 00:04:51.647 CC examples/nvme/arbitration/arbitration.o 00:04:51.647 CC test/nvme/compliance/nvme_compliance.o 00:04:51.905 CXX test/cpp_headers/keyring_module.o 00:04:51.906 CC test/nvme/fused_ordering/fused_ordering.o 00:04:51.906 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:51.906 LINK bdevperf 00:04:51.906 CC examples/nvme/hotplug/hotplug.o 00:04:51.906 CC test/nvme/fdp/fdp.o 00:04:51.906 CXX test/cpp_headers/likely.o 00:04:52.166 LINK doorbell_aers 00:04:52.166 
LINK fused_ordering
00:04:52.166 LINK hotplug
00:04:52.166 LINK arbitration
00:04:52.166 CC test/nvme/cuse/cuse.o
00:04:52.166 CXX test/cpp_headers/log.o
00:04:52.166 LINK nvme_compliance
00:04:52.166 LINK fdp
00:04:52.428 CXX test/cpp_headers/lvol.o
00:04:52.428 CC examples/nvme/cmb_copy/cmb_copy.o
00:04:52.428 CC examples/nvme/abort/abort.o
00:04:52.428 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:04:52.428 CXX test/cpp_headers/md5.o
00:04:52.428 CXX test/cpp_headers/memory.o
00:04:52.428 LINK spdk_nvme
00:04:52.428 CC app/fio/bdev/fio_plugin.o
00:04:52.428 CXX test/cpp_headers/mmio.o
00:04:52.685 CXX test/cpp_headers/nbd.o
00:04:52.685 LINK cmb_copy
00:04:52.685 CXX test/cpp_headers/net.o
00:04:52.685 LINK pmr_persistence
00:04:52.685 CXX test/cpp_headers/notify.o
00:04:52.685 CXX test/cpp_headers/nvme.o
00:04:52.685 CXX test/cpp_headers/nvme_intel.o
00:04:52.685 CXX test/cpp_headers/nvme_ocssd.o
00:04:52.685 CXX test/cpp_headers/nvme_ocssd_spec.o
00:04:52.685 CXX test/cpp_headers/nvme_spec.o
00:04:52.943 CXX test/cpp_headers/nvme_zns.o
00:04:52.943 LINK abort
00:04:52.943 CXX test/cpp_headers/nvmf_cmd.o
00:04:52.943 CXX test/cpp_headers/nvmf_fc_spec.o
00:04:52.943 CXX test/cpp_headers/nvmf.o
00:04:52.943 CXX test/cpp_headers/nvmf_spec.o
00:04:52.943 CXX test/cpp_headers/nvmf_transport.o
00:04:52.943 CXX test/cpp_headers/opal.o
00:04:53.202 CXX test/cpp_headers/opal_spec.o
00:04:53.202 CXX test/cpp_headers/pci_ids.o
00:04:53.202 LINK spdk_bdev
00:04:53.202 CXX test/cpp_headers/pipe.o
00:04:53.202 CXX test/cpp_headers/queue.o
00:04:53.202 CXX test/cpp_headers/reduce.o
00:04:53.202 CXX test/cpp_headers/rpc.o
00:04:53.202 CXX test/cpp_headers/scheduler.o
00:04:53.202 CC examples/nvmf/nvmf/nvmf.o
00:04:53.202 CXX test/cpp_headers/scsi.o
00:04:53.202 CXX test/cpp_headers/scsi_spec.o
00:04:53.202 CXX test/cpp_headers/sock.o
00:04:53.202 CXX test/cpp_headers/stdinc.o
00:04:53.468 CXX test/cpp_headers/string.o
00:04:53.468 CXX test/cpp_headers/thread.o
00:04:53.468 CXX test/cpp_headers/trace.o
00:04:53.468 CXX test/cpp_headers/trace_parser.o
00:04:53.468 CXX test/cpp_headers/tree.o
00:04:53.468 CXX test/cpp_headers/ublk.o
00:04:53.468 CXX test/cpp_headers/util.o
00:04:53.468 CXX test/cpp_headers/uuid.o
00:04:53.468 CXX test/cpp_headers/version.o
00:04:53.468 CXX test/cpp_headers/vfio_user_pci.o
00:04:53.468 CXX test/cpp_headers/vfio_user_spec.o
00:04:53.726 CXX test/cpp_headers/vhost.o
00:04:53.726 LINK nvmf
00:04:53.726 CXX test/cpp_headers/vmd.o
00:04:53.726 CXX test/cpp_headers/xor.o
00:04:53.726 CXX test/cpp_headers/zipf.o
00:04:53.726 LINK cuse
00:04:56.257 LINK esnap
00:04:56.516
00:04:56.516 real	1m30.946s
00:04:56.516 user	7m37.596s
00:04:56.516 sys	1m13.162s
00:04:56.516 05:52:19 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:04:56.516 05:52:19 make -- common/autotest_common.sh@10 -- $ set +x
00:04:56.516 ************************************
00:04:56.516 END TEST make
00:04:56.516 ************************************
00:04:56.516 05:52:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:56.516 05:52:19 -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:56.516 05:52:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:56.516 05:52:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:56.516 05:52:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:04:56.516 05:52:19 -- pm/common@44 -- $ pid=6065
00:04:56.516 05:52:19 -- pm/common@50 -- $ kill -TERM 6065
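The pm/common entries above, continuing below for collect-vmstat, are the pidfile-based teardown of the power monitors started before the build. A condensed sketch of the pattern, paraphrased from the trace (the real logic is signal_monitor_resources in SPDK's scripts/perf/pm helpers; the final rm is implied cleanup, not shown in this excerpt):

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile=$power_dir/$monitor.pid
        [[ -e $pidfile ]] || continue            # this monitor was never started
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true    # ask the collector to exit, as in "kill -TERM 6065"
        rm -f "$pidfile"                         # implied cleanup
    done
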
05:52:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.516 05:52:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:56.516 05:52:19 -- pm/common@44 -- $ pid=6067 00:04:56.516 05:52:19 -- pm/common@50 -- $ kill -TERM 6067 00:04:56.775 05:52:19 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.775 05:52:19 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.775 05:52:19 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:56.775 05:52:19 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:56.775 05:52:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.775 05:52:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.775 05:52:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.775 05:52:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.775 05:52:19 -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.775 05:52:19 -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.775 05:52:19 -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.775 05:52:19 -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.775 05:52:19 -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.775 05:52:19 -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.775 05:52:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.775 05:52:19 -- scripts/common.sh@344 -- # case "$op" in 00:04:56.775 05:52:19 -- scripts/common.sh@345 -- # : 1 00:04:56.775 05:52:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.775 05:52:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.775 05:52:19 -- scripts/common.sh@365 -- # decimal 1 00:04:56.775 05:52:19 -- scripts/common.sh@353 -- # local d=1 00:04:56.775 05:52:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.775 05:52:19 -- scripts/common.sh@355 -- # echo 1 00:04:56.775 05:52:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.775 05:52:19 -- scripts/common.sh@366 -- # decimal 2 00:04:56.775 05:52:19 -- scripts/common.sh@353 -- # local d=2 00:04:56.775 05:52:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.775 05:52:19 -- scripts/common.sh@355 -- # echo 2 00:04:56.775 05:52:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.775 05:52:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.775 05:52:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.775 05:52:19 -- scripts/common.sh@368 -- # return 0 00:04:56.776 05:52:19 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.776 05:52:19 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 05:52:19 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 05:52:19 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc 
genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 05:52:19 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 05:52:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.776 05:52:19 -- nvmf/common.sh@7 -- # uname -s 00:04:56.776 05:52:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.776 05:52:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.776 05:52:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.776 05:52:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.776 05:52:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.776 05:52:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.776 05:52:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.776 05:52:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.776 05:52:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.776 05:52:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.776 05:52:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00b433b8-09e1-47cf-b4d0-42bc79698308 00:04:56.776 05:52:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00b433b8-09e1-47cf-b4d0-42bc79698308 00:04:56.776 05:52:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.776 05:52:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.776 05:52:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.776 05:52:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.776 05:52:19 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.776 05:52:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.776 05:52:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.776 05:52:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.776 05:52:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.776 05:52:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 05:52:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 05:52:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 05:52:19 -- paths/export.sh@5 -- # export PATH 00:04:56.776 05:52:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 05:52:19 -- nvmf/common.sh@51 -- # : 0 00:04:56.776 05:52:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.776 05:52:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.776 05:52:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.776 05:52:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.776 05:52:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.776 05:52:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.776 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.776 05:52:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.776 05:52:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.776 05:52:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.776 05:52:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:56.776 05:52:19 -- spdk/autotest.sh@32 -- # uname -s 00:04:56.776 05:52:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:56.776 05:52:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:56.776 05:52:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.776 05:52:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:56.776 05:52:19 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.776 05:52:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:56.776 05:52:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:56.776 05:52:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:56.776 05:52:19 -- spdk/autotest.sh@48 -- # udevadm_pid=67454 00:04:56.776 05:52:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:56.776 05:52:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:56.776 05:52:19 -- pm/common@17 -- # local monitor 00:04:56.776 05:52:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.776 05:52:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.776 05:52:19 -- pm/common@25 -- # sleep 1 00:04:56.776 05:52:19 -- pm/common@21 -- # date +%s 00:04:56.776 05:52:19 -- pm/common@21 -- # date +%s 00:04:56.776 05:52:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733637139 00:04:56.776 05:52:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733637139 00:04:56.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733637139_collect-vmstat.pm.log 00:04:56.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733637139_collect-cpu-load.pm.log 00:04:58.152 05:52:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:58.152 05:52:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:58.152 05:52:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.152 05:52:20 -- common/autotest_common.sh@10 -- # set +x 00:04:58.152 05:52:20 -- spdk/autotest.sh@59 -- # create_test_list 
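The autotest prologue a few entries above swaps the kernel's core handler for SPDK's collector after saving the systemd-coredump pattern. A minimal sketch of that swap, assuming root and using the paths from the trace; the xtrace lines hide the redirections, so the writes to /proc/sys/kernel/core_pattern are inferred here, and the restore runs later during cleanup, outside this excerpt:

    output_dir=/home/vagrant/spdk_repo/spdk/../output
    spdk_dir=/home/vagrant/spdk_repo/spdk
    old_core_pattern=$(</proc/sys/kernel/core_pattern)   # the systemd-coredump line saved above
    mkdir -p "$output_dir/coredumps"
    # %P = pid, %s = signal, %t = dump time: the arguments core-collector.sh receives
    echo "|$spdk_dir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # ... tests run here ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restored during cleanup
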
00:04:58.152 05:52:20 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:58.152 05:52:20 -- common/autotest_common.sh@10 -- # set +x 00:04:58.152 05:52:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:58.152 05:52:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:58.152 05:52:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:58.152 05:52:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:58.152 05:52:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:58.152 05:52:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:58.152 05:52:20 -- common/autotest_common.sh@1455 -- # uname 00:04:58.152 05:52:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:58.152 05:52:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:58.152 05:52:20 -- common/autotest_common.sh@1475 -- # uname 00:04:58.152 05:52:20 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:58.152 05:52:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:58.152 05:52:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:58.152 lcov: LCOV version 1.15 00:04:58.152 05:52:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:13.111 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:13.111 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:31.225 05:52:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:31.225 05:52:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.225 05:52:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.225 05:52:52 -- spdk/autotest.sh@78 -- # rm -f 00:05:31.225 05:52:52 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.225 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:31.225 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:31.225 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:31.225 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:31.225 05:52:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:31.225 05:52:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:31.225 05:52:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:31.225 05:52:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:31.225 05:52:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.225 05:52:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.225 05:52:53 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.225 05:52:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.225 05:52:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.225 05:52:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:31.225 05:52:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:31.225 05:52:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.225 05:52:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:31.225 05:52:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:31.225 05:52:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.225 05:52:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:31.225 05:52:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.225 05:52:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.226 05:52:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:31.226 05:52:53 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:31.226 05:52:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:31.226 05:52:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.226 05:52:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:31.226 05:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.226 05:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.226 05:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:31.226 05:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:31.226 05:52:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:31.226 No valid GPT data, bailing 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # pt= 00:05:31.226 05:52:53 -- scripts/common.sh@395 -- # return 1 00:05:31.226 05:52:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:31.226 1+0 records in 00:05:31.226 1+0 records out 00:05:31.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109762 s, 95.5 MB/s 
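Above, the first namespace (/dev/nvme0n1) fails the partition-table probe and has its first MiB zeroed; the same check-and-wipe cycle repeats below for each remaining namespace. A sketch of the loop, paraphrased from the spdk/autotest.sh trace; it is destructive by design, zoned namespaces collected earlier are assumed excluded, and only the blkid half of block_in_use is shown (the trace also consults scripts/spdk-gpt.py):

    shopt -s extglob                        # the trace's glob is /dev/nvme*n!(*p*)
    for dev in /dev/nvme*n!(*p*); do
        # block_in_use: does the device carry a partition table?
        pt=$(blkid -s PTTYPE -o value "$dev") || pt=
        if [[ -z $pt ]]; then
            # no GPT -> scrub stale metadata from the first MiB,
            # producing the "1+0 records in/out" lines seen here
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync
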
00:05:31.226 05:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.226 05:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.226 05:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:31.226 05:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:31.226 05:52:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:31.226 No valid GPT data, bailing 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # pt= 00:05:31.226 05:52:53 -- scripts/common.sh@395 -- # return 1 00:05:31.226 05:52:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:31.226 1+0 records in 00:05:31.226 1+0 records out 00:05:31.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432414 s, 242 MB/s 00:05:31.226 05:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.226 05:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.226 05:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:31.226 05:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:31.226 05:52:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:31.226 No valid GPT data, bailing 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # pt= 00:05:31.226 05:52:53 -- scripts/common.sh@395 -- # return 1 00:05:31.226 05:52:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:31.226 1+0 records in 00:05:31.226 1+0 records out 00:05:31.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418972 s, 250 MB/s 00:05:31.226 05:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.226 05:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.226 05:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:31.226 05:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:31.226 05:52:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:31.226 No valid GPT data, bailing 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # pt= 00:05:31.226 05:52:53 -- scripts/common.sh@395 -- # return 1 00:05:31.226 05:52:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:31.226 1+0 records in 00:05:31.226 1+0 records out 00:05:31.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450009 s, 233 MB/s 00:05:31.226 05:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.226 05:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.226 05:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:31.226 05:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:31.226 05:52:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:31.226 No valid GPT data, bailing 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # pt= 00:05:31.226 05:52:53 -- scripts/common.sh@395 -- # return 1 00:05:31.226 05:52:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:31.226 1+0 records in 00:05:31.226 1+0 records out 00:05:31.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381874 s, 275 
MB/s
00:05:31.226 05:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:31.226 05:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:31.226 05:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:05:31.226 05:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:05:31.226 05:52:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:05:31.226 No valid GPT data, bailing
00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:05:31.226 05:52:53 -- scripts/common.sh@394 -- # pt=
00:05:31.226 05:52:53 -- scripts/common.sh@395 -- # return 1
00:05:31.226 05:52:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:05:31.226 1+0 records in
00:05:31.226 1+0 records out
00:05:31.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466797 s, 225 MB/s
00:05:31.226 05:52:53 -- spdk/autotest.sh@105 -- # sync
00:05:31.226 05:52:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:31.226 05:52:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:31.226 05:52:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:32.600 05:52:55 -- spdk/autotest.sh@111 -- # uname -s
00:05:32.600 05:52:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:32.600 05:52:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:32.600 05:52:55 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:33.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:33.731 Hugepages
00:05:33.731 node hugesize free / total
00:05:33.731 node0 1048576kB 0 / 0
00:05:33.731 node0 2048kB 0 / 0
00:05:33.731
00:05:33.731 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:33.731 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:33.731 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:33.990 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:33.990 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:33.990 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:33.990 05:52:56 -- spdk/autotest.sh@117 -- # uname -s
00:05:33.990 05:52:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:33.990 05:52:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:33.990 05:52:56 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:34.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:35.124 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:35.124 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:35.124 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:35.124 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:35.124 05:52:58 -- common/autotest_common.sh@1515 -- # sleep 1
00:05:36.500 05:52:59 -- common/autotest_common.sh@1516 -- # bdfs=()
00:05:36.500 05:52:59 -- common/autotest_common.sh@1516 -- # local bdfs
00:05:36.500 05:52:59 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:05:36.500 05:52:59 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:05:36.500 05:52:59 -- common/autotest_common.sh@1496 -- # bdfs=()
00:05:36.500 05:52:59 -- common/autotest_common.sh@1496 -- # local bdfs
00:05:36.500 05:52:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
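get_nvme_bdfs, traced in the last entries above and expanded in the ones that follow, derives the controller list from gen_nvme.sh's JSON output rather than from lspci. The pattern as it appears in the trace, with rootdir assumed to be the repo checkout:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits an SPDK bdev JSON config; jq extracts each controller's PCI address
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # this run: 0000:00:10.0 through 0000:00:13.0
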
00:05:36.500 05:52:59 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.500 05:52:59 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:36.500 05:52:59 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:36.500 05:52:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.500 05:52:59 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.758 Waiting for block devices as requested 00:05:37.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.017 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.017 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.017 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.289 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:42.289 05:53:05 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.289 05:53:05 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.289 05:53:05 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.289 05:53:05 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.289 05:53:05 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.289 05:53:05 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:42.289 05:53:05 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.289 05:53:05 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:42.289 05:53:05 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:42.289 05:53:05 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:42.289 05:53:05 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:42.289 05:53:05 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.289 05:53:05 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.289 05:53:05 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.289 05:53:05 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.289 05:53:05 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.289 05:53:05 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.289 05:53:05 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:42.289 05:53:05 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.289 05:53:05 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.289 05:53:05 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.289 05:53:05 -- common/autotest_common.sh@1541 -- # continue 00:05:42.289 05:53:05 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.289 05:53:05 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:42.289 05:53:05 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:42.289 05:53:05 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.289 05:53:05 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:42.289 05:53:05 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:42.289 05:53:05 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.289 05:53:05 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:42.289 05:53:05 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:42.289 05:53:05 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:42.289 05:53:05 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:42.289 05:53:05 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.290 05:53:05 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.290 05:53:05 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.290 05:53:05 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1541 -- # continue 00:05:42.290 05:53:05 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.290 05:53:05 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:42.290 05:53:05 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:42.290 05:53:05 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.290 05:53:05 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.290 05:53:05 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.290 05:53:05 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1541 -- # continue 00:05:42.290 05:53:05 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.290 05:53:05 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:42.290 05:53:05 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:42.290 05:53:05 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.290 05:53:05 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.290 05:53:05 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.290 05:53:05 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.290 05:53:05 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.290 05:53:05 -- common/autotest_common.sh@1541 -- # continue 00:05:42.290 05:53:05 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:42.290 05:53:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.290 05:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.290 05:53:05 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:42.290 05:53:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.290 05:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.290 05:53:05 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.422 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.422 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.680 05:53:06 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:43.680 05:53:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.680 05:53:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.680 05:53:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:43.680 05:53:06 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:43.680 05:53:06 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.680 05:53:06 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:43.680 05:53:06 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:43.680 05:53:06 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:43.680 05:53:06 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:43.680 05:53:06 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:43.680 05:53:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:43.680 
05:53:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:43.680 05:53:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.680 05:53:06 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.680 05:53:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:43.680 05:53:06 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:43.680 05:53:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:43.680 05:53:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.680 05:53:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:43.680 05:53:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.680 05:53:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.680 05:53:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.680 05:53:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:43.681 05:53:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.681 05:53:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.681 05:53:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.681 05:53:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:43.681 05:53:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.681 05:53:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.681 05:53:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.681 05:53:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:43.681 05:53:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.681 05:53:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.681 05:53:06 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:43.681 05:53:06 -- common/autotest_common.sh@1570 -- # return 0 00:05:43.681 05:53:06 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:43.681 05:53:06 -- common/autotest_common.sh@1578 -- # return 0 00:05:43.681 05:53:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:43.681 05:53:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:43.681 05:53:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.681 05:53:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.681 05:53:06 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:43.681 05:53:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.681 05:53:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.681 05:53:06 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:43.681 05:53:06 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.681 05:53:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.681 05:53:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.681 05:53:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.681 ************************************ 00:05:43.681 START TEST env 00:05:43.681 ************************************ 00:05:43.681 05:53:06 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.938 * Looking for test storage... 
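
For reference, the pre_cleanup loop traced above reduces to a per-controller readiness check: read the OACS field from nvme id-ctrl, test the namespace-management bit, then confirm no unallocated capacity remains (unvmcap == 0). A condensed sketch, assuming the controller name from this run:

  # Sketch of the check traced above, for one controller (values as seen in this run).
  ctrlr=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x12a' here
  oacs_ns_manage=$(( oacs & 0x8 ))                               # bit 3 = namespace management -> 8
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # ' 0' here
  if (( oacs_ns_manage != 0 )) && (( unvmcap == 0 )); then
    echo "$ctrlr supports namespace management and has no unallocated capacity; skip"
  fi

The opal_revert_cleanup step above applies a similar loop but keys on the PCI device ID read from /sys/bus/pci/devices/<bdf>/device: only 0x0a54 devices would get an OPAL revert, and the QEMU controllers here all report 0x0010, so that cleanup is a no-op.
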
00:05:43.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.938 05:53:06 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.938 05:53:06 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:43.938 05:53:06 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.938 05:53:06 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.938 05:53:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.938 05:53:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.938 05:53:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.938 05:53:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.938 05:53:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.938 05:53:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.938 05:53:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.938 05:53:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.938 05:53:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.938 05:53:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.938 05:53:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.938 05:53:06 env -- scripts/common.sh@344 -- # case "$op" in 00:05:43.938 05:53:06 env -- scripts/common.sh@345 -- # : 1 00:05:43.938 05:53:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.938 05:53:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.938 05:53:06 env -- scripts/common.sh@365 -- # decimal 1 00:05:43.939 05:53:06 env -- scripts/common.sh@353 -- # local d=1 00:05:43.939 05:53:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.939 05:53:06 env -- scripts/common.sh@355 -- # echo 1 00:05:43.939 05:53:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.939 05:53:06 env -- scripts/common.sh@366 -- # decimal 2 00:05:43.939 05:53:06 env -- scripts/common.sh@353 -- # local d=2 00:05:43.939 05:53:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.939 05:53:06 env -- scripts/common.sh@355 -- # echo 2 00:05:43.939 05:53:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.939 05:53:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.939 05:53:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.939 05:53:06 env -- scripts/common.sh@368 -- # return 0 00:05:43.939 05:53:06 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.939 05:53:06 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.939 --rc genhtml_branch_coverage=1 00:05:43.939 --rc genhtml_function_coverage=1 00:05:43.939 --rc genhtml_legend=1 00:05:43.939 --rc geninfo_all_blocks=1 00:05:43.939 --rc geninfo_unexecuted_blocks=1 00:05:43.939 00:05:43.939 ' 00:05:43.939 05:53:06 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.939 --rc genhtml_branch_coverage=1 00:05:43.939 --rc genhtml_function_coverage=1 00:05:43.939 --rc genhtml_legend=1 00:05:43.939 --rc geninfo_all_blocks=1 00:05:43.939 --rc geninfo_unexecuted_blocks=1 00:05:43.939 00:05:43.939 ' 00:05:43.939 05:53:06 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:43.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.939 --rc genhtml_branch_coverage=1 00:05:43.939 --rc genhtml_function_coverage=1 00:05:43.939 --rc 
genhtml_legend=1 00:05:43.939 --rc geninfo_all_blocks=1 00:05:43.939 --rc geninfo_unexecuted_blocks=1 00:05:43.939 00:05:43.939 ' 00:05:43.939 05:53:06 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.939 --rc genhtml_branch_coverage=1 00:05:43.939 --rc genhtml_function_coverage=1 00:05:43.939 --rc genhtml_legend=1 00:05:43.939 --rc geninfo_all_blocks=1 00:05:43.939 --rc geninfo_unexecuted_blocks=1 00:05:43.939 00:05:43.939 ' 00:05:43.939 05:53:06 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.939 05:53:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.939 05:53:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.939 05:53:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.939 ************************************ 00:05:43.939 START TEST env_memory 00:05:43.939 ************************************ 00:05:43.939 05:53:06 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.939 00:05:43.939 00:05:43.939 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.939 http://cunit.sourceforge.net/ 00:05:43.939 00:05:43.939 00:05:43.939 Suite: memory 00:05:43.939 Test: alloc and free memory map ...[2024-12-08 05:53:06.931782] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.939 passed 00:05:44.196 Test: mem map translation ...[2024-12-08 05:53:06.992439] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:44.196 [2024-12-08 05:53:06.992516] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:44.196 [2024-12-08 05:53:06.992612] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:44.196 [2024-12-08 05:53:06.992668] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:44.196 passed 00:05:44.196 Test: mem map registration ...[2024-12-08 05:53:07.090658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:44.196 [2024-12-08 05:53:07.090720] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:44.196 passed 00:05:44.196 Test: mem map adjacent registrations ...passed 00:05:44.196 00:05:44.196 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.196 suites 1 1 n/a 0 0 00:05:44.196 tests 4 4 4 0 0 00:05:44.196 asserts 152 152 152 0 n/a 00:05:44.196 00:05:44.196 Elapsed time = 0.338 seconds 00:05:44.196 00:05:44.196 real 0m0.376s 00:05:44.196 user 0m0.348s 00:05:44.196 sys 0m0.020s 00:05:44.196 ************************************ 00:05:44.196 END TEST env_memory 00:05:44.196 ************************************ 00:05:44.196 05:53:07 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.196 05:53:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:44.455 05:53:07 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.455 05:53:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.455 05:53:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.455 05:53:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.455 ************************************ 00:05:44.455 START TEST env_vtophys 00:05:44.455 ************************************ 00:05:44.455 05:53:07 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.455 EAL: lib.eal log level changed from notice to debug 00:05:44.455 EAL: Detected lcore 0 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 1 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 2 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 3 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 4 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 5 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 6 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 7 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 8 as core 0 on socket 0 00:05:44.455 EAL: Detected lcore 9 as core 0 on socket 0 00:05:44.455 EAL: Maximum logical cores by configuration: 128 00:05:44.455 EAL: Detected CPU lcores: 10 00:05:44.455 EAL: Detected NUMA nodes: 1 00:05:44.455 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:44.455 EAL: Detected shared linkage of DPDK 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:44.455 EAL: Registered [vdev] bus. 00:05:44.455 EAL: bus.vdev log level changed from disabled to notice 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:44.455 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:44.455 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:44.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:44.455 EAL: No shared files mode enabled, IPC will be disabled 00:05:44.455 EAL: No shared files mode enabled, IPC is disabled 00:05:44.455 EAL: Selected IOVA mode 'PA' 00:05:44.455 EAL: Probing VFIO support... 00:05:44.455 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.455 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:44.455 EAL: Ask a virtual area of 0x2e000 bytes 00:05:44.455 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:44.455 EAL: Setting up physically contiguous memory... 
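
The failed VFIO probe above, and the fallback it implies, can be checked by hand; a minimal sketch against the same sysfs paths the EAL looks at:

  # Mirror of the EAL probe above: is VFIO available on this host?
  if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
    echo "vfio loaded: vfio-pci with IOMMU is usable"
  else
    echo "vfio not loaded: devices stay on uio_pci_generic and IOVA mode falls back to PA"
  fi

The memseg-list reservations that follow are also easy to sanity-check: each list covers n_segs 8192 pages of hugepage_sz 2 MiB, i.e. 8192 x 2 MiB = 16 GiB = 0x400000000 bytes, exactly the size printed for each of the four reserved areas.
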
00:05:44.455 EAL: Setting maximum number of open files to 524288 00:05:44.455 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:44.455 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:44.455 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.455 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:44.456 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.456 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.456 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:44.456 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:44.456 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.456 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:44.456 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.456 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.456 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:44.456 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:44.456 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.456 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:44.456 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.456 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.456 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:44.456 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:44.456 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.456 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:44.456 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.456 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.456 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:44.456 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:44.456 EAL: Hugepages will be freed exactly as allocated. 00:05:44.456 EAL: No shared files mode enabled, IPC is disabled 00:05:44.456 EAL: No shared files mode enabled, IPC is disabled 00:05:44.456 EAL: TSC frequency is ~2200000 KHz 00:05:44.456 EAL: Main lcore 0 is ready (tid=7f91bf9eea40;cpuset=[0]) 00:05:44.456 EAL: Trying to obtain current memory policy. 00:05:44.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.456 EAL: Restoring previous memory policy: 0 00:05:44.456 EAL: request: mp_malloc_sync 00:05:44.456 EAL: No shared files mode enabled, IPC is disabled 00:05:44.456 EAL: Heap on socket 0 was expanded by 2MB 00:05:44.456 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.456 EAL: No shared files mode enabled, IPC is disabled 00:05:44.456 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:44.456 EAL: Mem event callback 'spdk:(nil)' registered 00:05:44.456 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:44.456 00:05:44.456 00:05:44.456 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.456 http://cunit.sourceforge.net/ 00:05:44.456 00:05:44.456 00:05:44.456 Suite: components_suite 00:05:45.022 Test: vtophys_malloc_test ...passed 00:05:45.022 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:45.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.022 EAL: Restoring previous memory policy: 4 00:05:45.022 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.022 EAL: request: mp_malloc_sync 00:05:45.022 EAL: No shared files mode enabled, IPC is disabled 00:05:45.022 EAL: Heap on socket 0 was expanded by 4MB 00:05:45.022 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.022 EAL: request: mp_malloc_sync 00:05:45.022 EAL: No shared files mode enabled, IPC is disabled 00:05:45.022 EAL: Heap on socket 0 was shrunk by 4MB 00:05:45.022 EAL: Trying to obtain current memory policy. 00:05:45.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.022 EAL: Restoring previous memory policy: 4 00:05:45.022 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.022 EAL: request: mp_malloc_sync 00:05:45.022 EAL: No shared files mode enabled, IPC is disabled 00:05:45.022 EAL: Heap on socket 0 was expanded by 6MB 00:05:45.022 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.022 EAL: request: mp_malloc_sync 00:05:45.022 EAL: No shared files mode enabled, IPC is disabled 00:05:45.022 EAL: Heap on socket 0 was shrunk by 6MB 00:05:45.023 EAL: Trying to obtain current memory policy. 00:05:45.023 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.023 EAL: Restoring previous memory policy: 4 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was expanded by 10MB 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was shrunk by 10MB 00:05:45.023 EAL: Trying to obtain current memory policy. 00:05:45.023 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.023 EAL: Restoring previous memory policy: 4 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was expanded by 18MB 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was shrunk by 18MB 00:05:45.023 EAL: Trying to obtain current memory policy. 00:05:45.023 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.023 EAL: Restoring previous memory policy: 4 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was expanded by 34MB 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was shrunk by 34MB 00:05:45.023 EAL: Trying to obtain current memory policy. 
00:05:45.023 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.023 EAL: Restoring previous memory policy: 4 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was expanded by 66MB 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was shrunk by 66MB 00:05:45.023 EAL: Trying to obtain current memory policy. 00:05:45.023 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.023 EAL: Restoring previous memory policy: 4 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was expanded by 130MB 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.023 EAL: Trying to obtain current memory policy. 00:05:45.023 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.023 EAL: Restoring previous memory policy: 4 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was expanded by 258MB 00:05:45.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.023 EAL: request: mp_malloc_sync 00:05:45.023 EAL: No shared files mode enabled, IPC is disabled 00:05:45.023 EAL: Heap on socket 0 was shrunk by 258MB 00:05:45.023 EAL: Trying to obtain current memory policy. 00:05:45.023 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.282 EAL: Restoring previous memory policy: 4 00:05:45.282 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.282 EAL: request: mp_malloc_sync 00:05:45.282 EAL: No shared files mode enabled, IPC is disabled 00:05:45.282 EAL: Heap on socket 0 was expanded by 514MB 00:05:45.282 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.282 EAL: request: mp_malloc_sync 00:05:45.282 EAL: No shared files mode enabled, IPC is disabled 00:05:45.282 EAL: Heap on socket 0 was shrunk by 514MB 00:05:45.282 EAL: Trying to obtain current memory policy. 
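
The expand/shrink sizes in this suite (4MB up through the 1026MB step just below) follow a clean pattern: each expansion reported by the EAL is 2^k + 2 MB, presumably 2^k MB of test allocation plus 2 MB of allocator overhead. A one-liner reproduces the ladder:

  # The heap sizes vtophys_spdk_malloc_test walks through: 2^k + 2 MB for k = 1..10.
  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB, matching the EAL messages.
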
00:05:45.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.541 EAL: Restoring previous memory policy: 4 00:05:45.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.541 EAL: request: mp_malloc_sync 00:05:45.541 EAL: No shared files mode enabled, IPC is disabled 00:05:45.541 EAL: Heap on socket 0 was expanded by 1026MB 00:05:45.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.801 passed 00:05:45.801 00:05:45.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.801 suites 1 1 n/a 0 0 00:05:45.801 tests 2 2 2 0 0 00:05:45.801 asserts 5386 5386 5386 0 n/a 00:05:45.801 00:05:45.801 Elapsed time = 1.109 seconds EAL: request: mp_malloc_sync 00:05:45.801 EAL: No shared files mode enabled, IPC is disabled 00:05:45.801 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:45.801 00:05:45.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.801 EAL: request: mp_malloc_sync 00:05:45.801 EAL: No shared files mode enabled, IPC is disabled 00:05:45.801 EAL: Heap on socket 0 was shrunk by 2MB 00:05:45.801 EAL: No shared files mode enabled, IPC is disabled 00:05:45.801 EAL: No shared files mode enabled, IPC is disabled 00:05:45.801 EAL: No shared files mode enabled, IPC is disabled 00:05:45.801 00:05:45.801 real 0m1.355s 00:05:45.801 user 0m0.596s 00:05:45.801 sys 0m0.622s 00:05:45.801 05:53:08 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.801 ************************************ 00:05:45.801 END TEST env_vtophys 00:05:45.801 ************************************ 00:05:45.801 05:53:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:45.801 05:53:08 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.801 05:53:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.801 05:53:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.801 05:53:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.801 ************************************ 00:05:45.801 START TEST env_pci 00:05:45.801 ************************************ 00:05:45.801 05:53:08 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.801 00:05:45.801 00:05:45.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.801 http://cunit.sourceforge.net/ 00:05:45.801 00:05:45.801 00:05:45.801 Suite: pci 00:05:45.801 Test: pci_hook ...[2024-12-08 05:53:08.715883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70222 has claimed it 00:05:45.801 passed 00:05:45.801 00:05:45.801 EAL: Cannot find device (10000:00:01.0) 00:05:45.801 EAL: Failed to attach device on primary process 00:05:45.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.801 suites 1 1 n/a 0 0 00:05:45.801 tests 1 1 1 0 0 00:05:45.801 asserts 25 25 25 0 n/a 00:05:45.801 00:05:45.801 Elapsed time = 0.006 seconds 00:05:45.801 00:05:45.801 real 0m0.065s 00:05:45.801 user 0m0.029s 00:05:45.801 sys 0m0.035s 00:05:45.801 05:53:08 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.801 05:53:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:45.801 ************************************ 00:05:45.801 END TEST env_pci 00:05:45.801 ************************************ 00:05:45.801 05:53:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:45.801 05:53:08 env -- env/env.sh@15 -- # uname 00:05:45.801 05:53:08 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:45.801 05:53:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:45.801 05:53:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.801 05:53:08 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:45.801 05:53:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.801 05:53:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.801 ************************************ 00:05:45.801 START TEST env_dpdk_post_init 00:05:45.801 ************************************ 00:05:45.801 05:53:08 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:46.061 EAL: Detected CPU lcores: 10 00:05:46.061 EAL: Detected NUMA nodes: 1 00:05:46.061 EAL: Detected shared linkage of DPDK 00:05:46.061 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.061 EAL: Selected IOVA mode 'PA' 00:05:46.061 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.061 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:46.061 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:46.061 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:46.061 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:46.061 Starting DPDK initialization... 00:05:46.061 Starting SPDK post initialization... 00:05:46.061 SPDK NVMe probe 00:05:46.061 Attaching to 0000:00:10.0 00:05:46.061 Attaching to 0000:00:11.0 00:05:46.061 Attaching to 0000:00:12.0 00:05:46.061 Attaching to 0000:00:13.0 00:05:46.061 Attached to 0000:00:10.0 00:05:46.061 Attached to 0000:00:11.0 00:05:46.061 Attached to 0000:00:13.0 00:05:46.061 Attached to 0000:00:12.0 00:05:46.061 Cleaning up... 
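
A quick cross-check of the four attachments reported above is to read each device's driver link out of sysfs; a sketch using the addresses from this run:

  # Which kernel driver is each controller bound to? (addresses taken from the log)
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
  done
  # Expected here: uio_pci_generic for all four, as bound by scripts/setup.sh earlier.
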
00:05:46.061 00:05:46.061 real 0m0.229s 00:05:46.061 user 0m0.057s 00:05:46.061 sys 0m0.074s 00:05:46.061 05:53:09 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.061 ************************************ 00:05:46.061 05:53:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.061 END TEST env_dpdk_post_init 00:05:46.061 ************************************ 00:05:46.061 05:53:09 env -- env/env.sh@26 -- # uname 00:05:46.061 05:53:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:46.061 05:53:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.061 05:53:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.061 05:53:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.061 05:53:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.061 ************************************ 00:05:46.061 START TEST env_mem_callbacks 00:05:46.061 ************************************ 00:05:46.061 05:53:09 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.320 EAL: Detected CPU lcores: 10 00:05:46.321 EAL: Detected NUMA nodes: 1 00:05:46.321 EAL: Detected shared linkage of DPDK 00:05:46.321 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.321 EAL: Selected IOVA mode 'PA' 00:05:46.321 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.321 00:05:46.321 00:05:46.321 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.321 http://cunit.sourceforge.net/ 00:05:46.321 00:05:46.321 00:05:46.321 Suite: memory 00:05:46.321 Test: test ... 00:05:46.321 register 0x200000200000 2097152 00:05:46.321 malloc 3145728 00:05:46.321 register 0x200000400000 4194304 00:05:46.321 buf 0x200000500000 len 3145728 PASSED 00:05:46.321 malloc 64 00:05:46.321 buf 0x2000004fff40 len 64 PASSED 00:05:46.321 malloc 4194304 00:05:46.321 register 0x200000800000 6291456 00:05:46.321 buf 0x200000a00000 len 4194304 PASSED 00:05:46.321 free 0x200000500000 3145728 00:05:46.321 free 0x2000004fff40 64 00:05:46.321 unregister 0x200000400000 4194304 PASSED 00:05:46.321 free 0x200000a00000 4194304 00:05:46.321 unregister 0x200000800000 6291456 PASSED 00:05:46.321 malloc 8388608 00:05:46.321 register 0x200000400000 10485760 00:05:46.321 buf 0x200000600000 len 8388608 PASSED 00:05:46.321 free 0x200000600000 8388608 00:05:46.321 unregister 0x200000400000 10485760 PASSED 00:05:46.321 passed 00:05:46.321 00:05:46.321 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.321 suites 1 1 n/a 0 0 00:05:46.321 tests 1 1 1 0 0 00:05:46.321 asserts 15 15 15 0 n/a 00:05:46.321 00:05:46.321 Elapsed time = 0.008 seconds 00:05:46.321 00:05:46.321 real 0m0.156s 00:05:46.321 user 0m0.025s 00:05:46.321 sys 0m0.029s 00:05:46.321 05:53:09 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.321 05:53:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.321 ************************************ 00:05:46.321 END TEST env_mem_callbacks 00:05:46.321 ************************************ 00:05:46.321 00:05:46.321 real 0m2.647s 00:05:46.321 user 0m1.259s 00:05:46.321 sys 0m1.029s 00:05:46.321 05:53:09 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.321 ************************************ 00:05:46.321 END TEST env 00:05:46.321 ************************************ 00:05:46.321 05:53:09 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.321 05:53:09 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.321 05:53:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.321 05:53:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.321 05:53:09 -- common/autotest_common.sh@10 -- # set +x 00:05:46.321 ************************************ 00:05:46.321 START TEST rpc 00:05:46.321 ************************************ 00:05:46.321 05:53:09 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.580 * Looking for test storage... 00:05:46.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.580 05:53:09 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.580 05:53:09 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.580 05:53:09 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.580 05:53:09 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.580 05:53:09 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.580 05:53:09 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.580 05:53:09 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.580 05:53:09 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.580 05:53:09 rpc -- scripts/common.sh@345 -- # : 1 00:05:46.580 05:53:09 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.580 05:53:09 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.580 05:53:09 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.580 05:53:09 rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.580 05:53:09 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.580 05:53:09 rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.580 05:53:09 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.580 05:53:09 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.580 05:53:09 rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.580 05:53:09 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.580 05:53:09 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.580 05:53:09 rpc -- scripts/common.sh@368 -- # return 0 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.580 --rc genhtml_branch_coverage=1 00:05:46.580 --rc genhtml_function_coverage=1 00:05:46.580 --rc genhtml_legend=1 00:05:46.580 --rc geninfo_all_blocks=1 00:05:46.580 --rc geninfo_unexecuted_blocks=1 00:05:46.580 00:05:46.580 ' 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.580 --rc genhtml_branch_coverage=1 00:05:46.580 --rc genhtml_function_coverage=1 00:05:46.580 --rc genhtml_legend=1 00:05:46.580 --rc geninfo_all_blocks=1 00:05:46.580 --rc geninfo_unexecuted_blocks=1 00:05:46.580 00:05:46.580 ' 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.580 --rc genhtml_branch_coverage=1 00:05:46.580 --rc genhtml_function_coverage=1 00:05:46.580 --rc genhtml_legend=1 00:05:46.580 --rc geninfo_all_blocks=1 00:05:46.580 --rc geninfo_unexecuted_blocks=1 00:05:46.580 00:05:46.580 ' 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.580 --rc genhtml_branch_coverage=1 00:05:46.580 --rc genhtml_function_coverage=1 00:05:46.580 --rc genhtml_legend=1 00:05:46.580 --rc geninfo_all_blocks=1 00:05:46.580 --rc geninfo_unexecuted_blocks=1 00:05:46.580 00:05:46.580 ' 00:05:46.580 05:53:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70349 00:05:46.580 05:53:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.580 05:53:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:46.580 05:53:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70349 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@831 -- # '[' -z 70349 ']' 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
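
The "Waiting for process to start up..." message above is followed by a poll until the target answers on its RPC socket; a simplified stand-in for that wait (retry count and interval are assumptions):

  # Poll until spdk_tgt responds on the default RPC socket.
  rpc_addr=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.1
  done
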
00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.580 05:53:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.845 [2024-12-08 05:53:09.657823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:46.845 [2024-12-08 05:53:09.658608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70349 ] 00:05:46.845 [2024-12-08 05:53:09.826609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.845 [2024-12-08 05:53:09.870326] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.845 [2024-12-08 05:53:09.870408] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70349' to capture a snapshot of events at runtime. 00:05:46.845 [2024-12-08 05:53:09.870437] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.845 [2024-12-08 05:53:09.870452] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.845 [2024-12-08 05:53:09.870498] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70349 for offline analysis/debug. 00:05:46.845 [2024-12-08 05:53:09.870552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.844 05:53:10 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.844 05:53:10 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.844 05:53:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.844 05:53:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.844 05:53:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.844 05:53:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.844 05:53:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.844 05:53:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.844 05:53:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.844 ************************************ 00:05:47.844 START TEST rpc_integrity 00:05:47.844 ************************************ 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:47.844 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.844 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.844 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.844 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.844 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.844 
05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.844 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.844 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.844 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.845 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.845 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.845 { 00:05:47.845 "name": "Malloc0", 00:05:47.845 "aliases": [ 00:05:47.845 "79e67e8a-18bb-467a-8ac5-606c1d21997e" 00:05:47.845 ], 00:05:47.845 "product_name": "Malloc disk", 00:05:47.845 "block_size": 512, 00:05:47.845 "num_blocks": 16384, 00:05:47.845 "uuid": "79e67e8a-18bb-467a-8ac5-606c1d21997e", 00:05:47.845 "assigned_rate_limits": { 00:05:47.845 "rw_ios_per_sec": 0, 00:05:47.845 "rw_mbytes_per_sec": 0, 00:05:47.845 "r_mbytes_per_sec": 0, 00:05:47.845 "w_mbytes_per_sec": 0 00:05:47.845 }, 00:05:47.845 "claimed": false, 00:05:47.845 "zoned": false, 00:05:47.845 "supported_io_types": { 00:05:47.845 "read": true, 00:05:47.845 "write": true, 00:05:47.845 "unmap": true, 00:05:47.845 "flush": true, 00:05:47.845 "reset": true, 00:05:47.845 "nvme_admin": false, 00:05:47.845 "nvme_io": false, 00:05:47.845 "nvme_io_md": false, 00:05:47.845 "write_zeroes": true, 00:05:47.845 "zcopy": true, 00:05:47.845 "get_zone_info": false, 00:05:47.845 "zone_management": false, 00:05:47.845 "zone_append": false, 00:05:47.845 "compare": false, 00:05:47.845 "compare_and_write": false, 00:05:47.845 "abort": true, 00:05:47.845 "seek_hole": false, 00:05:47.845 "seek_data": false, 00:05:47.845 "copy": true, 00:05:47.845 "nvme_iov_md": false 00:05:47.845 }, 00:05:47.845 "memory_domains": [ 00:05:47.845 { 00:05:47.845 "dma_device_id": "system", 00:05:47.845 "dma_device_type": 1 00:05:47.845 }, 00:05:47.845 { 00:05:47.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.845 "dma_device_type": 2 00:05:47.845 } 00:05:47.845 ], 00:05:47.845 "driver_specific": {} 00:05:47.845 } 00:05:47.845 ]' 00:05:47.845 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.845 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.845 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.845 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.845 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.845 [2024-12-08 05:53:10.814287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.845 [2024-12-08 05:53:10.814362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.845 [2024-12-08 05:53:10.814406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:47.845 [2024-12-08 05:53:10.814423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.845 [2024-12-08 05:53:10.817050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.845 [2024-12-08 05:53:10.817107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.845 Passthru0 00:05:47.845 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
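
The rpc_integrity sequence traced here can be driven by hand through scripts/rpc.py as well; a sketch of the same create/inspect/layer steps, assuming the default socket path:

  # Same steps as the test: create a malloc bdev, layer a passthru on it, count bdevs.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs | jq length                      # 0 on a fresh target
  malloc=$($rpc bdev_malloc_create 8 512)              # 8 MB, 512 B blocks; prints the name, e.g. Malloc0
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0  # claims the malloc bdev (claim_type exclusive_write)
  $rpc bdev_get_bdevs | jq length                      # now 2: the malloc plus the passthru
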
00:05:47.845 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.845 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.845 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.845 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.845 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.845 { 00:05:47.845 "name": "Malloc0", 00:05:47.845 "aliases": [ 00:05:47.845 "79e67e8a-18bb-467a-8ac5-606c1d21997e" 00:05:47.845 ], 00:05:47.845 "product_name": "Malloc disk", 00:05:47.845 "block_size": 512, 00:05:47.845 "num_blocks": 16384, 00:05:47.845 "uuid": "79e67e8a-18bb-467a-8ac5-606c1d21997e", 00:05:47.845 "assigned_rate_limits": { 00:05:47.845 "rw_ios_per_sec": 0, 00:05:47.845 "rw_mbytes_per_sec": 0, 00:05:47.845 "r_mbytes_per_sec": 0, 00:05:47.845 "w_mbytes_per_sec": 0 00:05:47.845 }, 00:05:47.845 "claimed": true, 00:05:47.845 "claim_type": "exclusive_write", 00:05:47.845 "zoned": false, 00:05:47.845 "supported_io_types": { 00:05:47.845 "read": true, 00:05:47.845 "write": true, 00:05:47.845 "unmap": true, 00:05:47.845 "flush": true, 00:05:47.845 "reset": true, 00:05:47.845 "nvme_admin": false, 00:05:47.845 "nvme_io": false, 00:05:47.845 "nvme_io_md": false, 00:05:47.845 "write_zeroes": true, 00:05:47.845 "zcopy": true, 00:05:47.845 "get_zone_info": false, 00:05:47.845 "zone_management": false, 00:05:47.845 "zone_append": false, 00:05:47.845 "compare": false, 00:05:47.845 "compare_and_write": false, 00:05:47.845 "abort": true, 00:05:47.845 "seek_hole": false, 00:05:47.845 "seek_data": false, 00:05:47.845 "copy": true, 00:05:47.845 "nvme_iov_md": false 00:05:47.845 }, 00:05:47.845 "memory_domains": [ 00:05:47.845 { 00:05:47.845 "dma_device_id": "system", 00:05:47.845 "dma_device_type": 1 00:05:47.845 }, 00:05:47.845 { 00:05:47.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.845 "dma_device_type": 2 00:05:47.845 } 00:05:47.845 ], 00:05:47.845 "driver_specific": {} 00:05:47.845 }, 00:05:47.845 { 00:05:47.845 "name": "Passthru0", 00:05:47.845 "aliases": [ 00:05:47.845 "48e32afe-f512-5131-a037-56de77f7ae92" 00:05:47.845 ], 00:05:47.845 "product_name": "passthru", 00:05:47.845 "block_size": 512, 00:05:47.845 "num_blocks": 16384, 00:05:47.845 "uuid": "48e32afe-f512-5131-a037-56de77f7ae92", 00:05:47.845 "assigned_rate_limits": { 00:05:47.845 "rw_ios_per_sec": 0, 00:05:47.845 "rw_mbytes_per_sec": 0, 00:05:47.845 "r_mbytes_per_sec": 0, 00:05:47.845 "w_mbytes_per_sec": 0 00:05:47.845 }, 00:05:47.845 "claimed": false, 00:05:47.845 "zoned": false, 00:05:47.845 "supported_io_types": { 00:05:47.845 "read": true, 00:05:47.845 "write": true, 00:05:47.845 "unmap": true, 00:05:47.845 "flush": true, 00:05:47.845 "reset": true, 00:05:47.845 "nvme_admin": false, 00:05:47.845 "nvme_io": false, 00:05:47.845 "nvme_io_md": false, 00:05:47.845 "write_zeroes": true, 00:05:47.845 "zcopy": true, 00:05:47.845 "get_zone_info": false, 00:05:47.845 "zone_management": false, 00:05:47.845 "zone_append": false, 00:05:47.845 "compare": false, 00:05:47.845 "compare_and_write": false, 00:05:47.845 "abort": true, 00:05:47.845 "seek_hole": false, 00:05:47.845 "seek_data": false, 00:05:47.845 "copy": true, 00:05:47.845 "nvme_iov_md": false 00:05:47.845 }, 00:05:47.845 "memory_domains": [ 00:05:47.845 { 00:05:47.846 "dma_device_id": "system", 00:05:47.846 "dma_device_type": 1 00:05:47.846 }, 00:05:47.846 { 00:05:47.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.846 "dma_device_type": 
2 00:05:47.846 } 00:05:47.846 ], 00:05:47.846 "driver_specific": { 00:05:47.846 "passthru": { 00:05:47.846 "name": "Passthru0", 00:05:47.846 "base_bdev_name": "Malloc0" 00:05:47.846 } 00:05:47.846 } 00:05:47.846 } 00:05:47.846 ]' 00:05:47.846 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.106 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.106 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.106 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.106 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.106 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.106 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.106 05:53:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.106 00:05:48.106 real 0m0.324s 00:05:48.106 user 0m0.223s 00:05:48.106 sys 0m0.032s 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.106 05:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 ************************************ 00:05:48.106 END TEST rpc_integrity 00:05:48.106 ************************************ 00:05:48.106 05:53:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:48.106 05:53:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.106 05:53:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.106 05:53:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 ************************************ 00:05:48.106 START TEST rpc_plugins 00:05:48.106 ************************************ 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.106 { 00:05:48.106 "name": "Malloc1", 00:05:48.106 
"aliases": [ 00:05:48.106 "e9e96942-606d-44ad-867a-7336661b0a98" 00:05:48.106 ], 00:05:48.106 "product_name": "Malloc disk", 00:05:48.106 "block_size": 4096, 00:05:48.106 "num_blocks": 256, 00:05:48.106 "uuid": "e9e96942-606d-44ad-867a-7336661b0a98", 00:05:48.106 "assigned_rate_limits": { 00:05:48.106 "rw_ios_per_sec": 0, 00:05:48.106 "rw_mbytes_per_sec": 0, 00:05:48.106 "r_mbytes_per_sec": 0, 00:05:48.106 "w_mbytes_per_sec": 0 00:05:48.106 }, 00:05:48.106 "claimed": false, 00:05:48.106 "zoned": false, 00:05:48.106 "supported_io_types": { 00:05:48.106 "read": true, 00:05:48.106 "write": true, 00:05:48.106 "unmap": true, 00:05:48.106 "flush": true, 00:05:48.106 "reset": true, 00:05:48.106 "nvme_admin": false, 00:05:48.106 "nvme_io": false, 00:05:48.106 "nvme_io_md": false, 00:05:48.106 "write_zeroes": true, 00:05:48.106 "zcopy": true, 00:05:48.106 "get_zone_info": false, 00:05:48.106 "zone_management": false, 00:05:48.106 "zone_append": false, 00:05:48.106 "compare": false, 00:05:48.106 "compare_and_write": false, 00:05:48.106 "abort": true, 00:05:48.106 "seek_hole": false, 00:05:48.106 "seek_data": false, 00:05:48.106 "copy": true, 00:05:48.106 "nvme_iov_md": false 00:05:48.106 }, 00:05:48.106 "memory_domains": [ 00:05:48.106 { 00:05:48.106 "dma_device_id": "system", 00:05:48.106 "dma_device_type": 1 00:05:48.106 }, 00:05:48.106 { 00:05:48.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.106 "dma_device_type": 2 00:05:48.106 } 00:05:48.106 ], 00:05:48.106 "driver_specific": {} 00:05:48.106 } 00:05:48.106 ]' 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.106 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.364 05:53:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.364 00:05:48.364 real 0m0.146s 00:05:48.364 user 0m0.100s 00:05:48.364 sys 0m0.014s 00:05:48.364 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.364 05:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.364 ************************************ 00:05:48.364 END TEST rpc_plugins 00:05:48.364 ************************************ 00:05:48.364 05:53:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.364 05:53:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.364 05:53:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.364 05:53:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.364 ************************************ 00:05:48.364 START TEST rpc_trace_cmd_test 00:05:48.364 ************************************ 00:05:48.364 05:53:11 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:48.364 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:48.364 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.364 05:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.364 05:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.365 05:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.365 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:48.365 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70349", 00:05:48.365 "tpoint_group_mask": "0x8", 00:05:48.365 "iscsi_conn": { 00:05:48.365 "mask": "0x2", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "scsi": { 00:05:48.365 "mask": "0x4", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "bdev": { 00:05:48.365 "mask": "0x8", 00:05:48.365 "tpoint_mask": "0xffffffffffffffff" 00:05:48.365 }, 00:05:48.365 "nvmf_rdma": { 00:05:48.365 "mask": "0x10", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "nvmf_tcp": { 00:05:48.365 "mask": "0x20", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "ftl": { 00:05:48.365 "mask": "0x40", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "blobfs": { 00:05:48.365 "mask": "0x80", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "dsa": { 00:05:48.365 "mask": "0x200", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "thread": { 00:05:48.365 "mask": "0x400", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "nvme_pcie": { 00:05:48.365 "mask": "0x800", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "iaa": { 00:05:48.365 "mask": "0x1000", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "nvme_tcp": { 00:05:48.365 "mask": "0x2000", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "bdev_nvme": { 00:05:48.365 "mask": "0x4000", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "sock": { 00:05:48.365 "mask": "0x8000", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "blob": { 00:05:48.365 "mask": "0x10000", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 }, 00:05:48.365 "bdev_raid": { 00:05:48.365 "mask": "0x20000", 00:05:48.365 "tpoint_mask": "0x0" 00:05:48.365 } 00:05:48.365 }' 00:05:48.365 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:48.365 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:48.365 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:48.365 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:48.365 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:48.623 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:48.623 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.623 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.623 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.623 05:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:48.623 00:05:48.623 real 0m0.293s 00:05:48.623 user 0m0.248s 00:05:48.623 sys 0m0.029s 00:05:48.623 05:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.623 05:53:11 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.623 ************************************ 00:05:48.623 END TEST rpc_trace_cmd_test 00:05:48.623 ************************************ 00:05:48.623 05:53:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:48.623 05:53:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.623 05:53:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.623 05:53:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.623 05:53:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.623 05:53:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.623 ************************************ 00:05:48.623 START TEST rpc_daemon_integrity 00:05:48.623 ************************************ 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.623 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.623 { 00:05:48.623 "name": "Malloc2", 00:05:48.623 "aliases": [ 00:05:48.623 "6b3ba599-6231-4e65-8493-879c5c2419d5" 00:05:48.623 ], 00:05:48.623 "product_name": "Malloc disk", 00:05:48.623 "block_size": 512, 00:05:48.623 "num_blocks": 16384, 00:05:48.623 "uuid": "6b3ba599-6231-4e65-8493-879c5c2419d5", 00:05:48.623 "assigned_rate_limits": { 00:05:48.623 "rw_ios_per_sec": 0, 00:05:48.623 "rw_mbytes_per_sec": 0, 00:05:48.623 "r_mbytes_per_sec": 0, 00:05:48.623 "w_mbytes_per_sec": 0 00:05:48.623 }, 00:05:48.623 "claimed": false, 00:05:48.623 "zoned": false, 00:05:48.623 "supported_io_types": { 00:05:48.623 "read": true, 00:05:48.623 "write": true, 00:05:48.623 "unmap": true, 00:05:48.623 "flush": true, 00:05:48.623 "reset": true, 00:05:48.623 "nvme_admin": false, 00:05:48.623 "nvme_io": false, 00:05:48.623 "nvme_io_md": false, 00:05:48.623 "write_zeroes": true, 00:05:48.623 "zcopy": true, 00:05:48.623 "get_zone_info": false, 00:05:48.623 "zone_management": false, 00:05:48.623 "zone_append": false, 00:05:48.623 "compare": false, 00:05:48.623 "compare_and_write": false, 00:05:48.623 "abort": true, 
00:05:48.623 "seek_hole": false, 00:05:48.624 "seek_data": false, 00:05:48.624 "copy": true, 00:05:48.624 "nvme_iov_md": false 00:05:48.624 }, 00:05:48.624 "memory_domains": [ 00:05:48.624 { 00:05:48.624 "dma_device_id": "system", 00:05:48.624 "dma_device_type": 1 00:05:48.624 }, 00:05:48.624 { 00:05:48.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.624 "dma_device_type": 2 00:05:48.624 } 00:05:48.624 ], 00:05:48.624 "driver_specific": {} 00:05:48.624 } 00:05:48.624 ]' 00:05:48.624 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.883 [2024-12-08 05:53:11.718981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:48.883 [2024-12-08 05:53:11.719066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.883 [2024-12-08 05:53:11.719099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:48.883 [2024-12-08 05:53:11.719115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.883 [2024-12-08 05:53:11.721786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.883 [2024-12-08 05:53:11.721843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.883 Passthru0 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.883 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.883 { 00:05:48.883 "name": "Malloc2", 00:05:48.883 "aliases": [ 00:05:48.883 "6b3ba599-6231-4e65-8493-879c5c2419d5" 00:05:48.883 ], 00:05:48.883 "product_name": "Malloc disk", 00:05:48.883 "block_size": 512, 00:05:48.883 "num_blocks": 16384, 00:05:48.883 "uuid": "6b3ba599-6231-4e65-8493-879c5c2419d5", 00:05:48.883 "assigned_rate_limits": { 00:05:48.883 "rw_ios_per_sec": 0, 00:05:48.883 "rw_mbytes_per_sec": 0, 00:05:48.883 "r_mbytes_per_sec": 0, 00:05:48.883 "w_mbytes_per_sec": 0 00:05:48.883 }, 00:05:48.883 "claimed": true, 00:05:48.883 "claim_type": "exclusive_write", 00:05:48.883 "zoned": false, 00:05:48.883 "supported_io_types": { 00:05:48.883 "read": true, 00:05:48.883 "write": true, 00:05:48.883 "unmap": true, 00:05:48.883 "flush": true, 00:05:48.883 "reset": true, 00:05:48.883 "nvme_admin": false, 00:05:48.883 "nvme_io": false, 00:05:48.883 "nvme_io_md": false, 00:05:48.883 "write_zeroes": true, 00:05:48.883 "zcopy": true, 00:05:48.883 "get_zone_info": false, 00:05:48.883 "zone_management": false, 00:05:48.883 "zone_append": false, 00:05:48.883 "compare": false, 00:05:48.883 "compare_and_write": false, 00:05:48.883 "abort": true, 00:05:48.883 "seek_hole": false, 00:05:48.883 "seek_data": false, 00:05:48.883 "copy": true, 00:05:48.883 "nvme_iov_md": 
false 00:05:48.883 }, 00:05:48.883 "memory_domains": [ 00:05:48.883 { 00:05:48.883 "dma_device_id": "system", 00:05:48.883 "dma_device_type": 1 00:05:48.883 }, 00:05:48.883 { 00:05:48.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.883 "dma_device_type": 2 00:05:48.883 } 00:05:48.883 ], 00:05:48.883 "driver_specific": {} 00:05:48.883 }, 00:05:48.883 { 00:05:48.883 "name": "Passthru0", 00:05:48.883 "aliases": [ 00:05:48.883 "1ab643b6-46f3-54d0-b277-c02887eeecd8" 00:05:48.883 ], 00:05:48.883 "product_name": "passthru", 00:05:48.883 "block_size": 512, 00:05:48.883 "num_blocks": 16384, 00:05:48.883 "uuid": "1ab643b6-46f3-54d0-b277-c02887eeecd8", 00:05:48.883 "assigned_rate_limits": { 00:05:48.883 "rw_ios_per_sec": 0, 00:05:48.883 "rw_mbytes_per_sec": 0, 00:05:48.883 "r_mbytes_per_sec": 0, 00:05:48.883 "w_mbytes_per_sec": 0 00:05:48.883 }, 00:05:48.883 "claimed": false, 00:05:48.883 "zoned": false, 00:05:48.883 "supported_io_types": { 00:05:48.883 "read": true, 00:05:48.883 "write": true, 00:05:48.883 "unmap": true, 00:05:48.883 "flush": true, 00:05:48.883 "reset": true, 00:05:48.883 "nvme_admin": false, 00:05:48.883 "nvme_io": false, 00:05:48.883 "nvme_io_md": false, 00:05:48.883 "write_zeroes": true, 00:05:48.883 "zcopy": true, 00:05:48.883 "get_zone_info": false, 00:05:48.883 "zone_management": false, 00:05:48.883 "zone_append": false, 00:05:48.883 "compare": false, 00:05:48.883 "compare_and_write": false, 00:05:48.883 "abort": true, 00:05:48.883 "seek_hole": false, 00:05:48.883 "seek_data": false, 00:05:48.883 "copy": true, 00:05:48.883 "nvme_iov_md": false 00:05:48.883 }, 00:05:48.883 "memory_domains": [ 00:05:48.883 { 00:05:48.883 "dma_device_id": "system", 00:05:48.884 "dma_device_type": 1 00:05:48.884 }, 00:05:48.884 { 00:05:48.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.884 "dma_device_type": 2 00:05:48.884 } 00:05:48.884 ], 00:05:48.884 "driver_specific": { 00:05:48.884 "passthru": { 00:05:48.884 "name": "Passthru0", 00:05:48.884 "base_bdev_name": "Malloc2" 00:05:48.884 } 00:05:48.884 } 00:05:48.884 } 00:05:48.884 ]' 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 
-- # jq length 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.884 00:05:48.884 real 0m0.316s 00:05:48.884 user 0m0.211s 00:05:48.884 sys 0m0.036s 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.884 05:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.884 ************************************ 00:05:48.884 END TEST rpc_daemon_integrity 00:05:48.884 ************************************ 00:05:49.143 05:53:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.143 05:53:11 rpc -- rpc/rpc.sh@84 -- # killprocess 70349 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@950 -- # '[' -z 70349 ']' 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@954 -- # kill -0 70349 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70349 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.143 killing process with pid 70349 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70349' 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@969 -- # kill 70349 00:05:49.143 05:53:11 rpc -- common/autotest_common.sh@974 -- # wait 70349 00:05:49.402 00:05:49.402 real 0m2.940s 00:05:49.402 user 0m3.831s 00:05:49.402 sys 0m0.707s 00:05:49.402 05:53:12 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.402 ************************************ 00:05:49.402 END TEST rpc 00:05:49.402 ************************************ 00:05:49.403 05:53:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.403 05:53:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.403 05:53:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.403 05:53:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.403 05:53:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.403 ************************************ 00:05:49.403 START TEST skip_rpc 00:05:49.403 ************************************ 00:05:49.403 05:53:12 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.403 * Looking for test storage... 
00:05:49.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.403 05:53:12 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.403 05:53:12 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.403 05:53:12 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.662 05:53:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.662 --rc genhtml_branch_coverage=1 00:05:49.662 --rc genhtml_function_coverage=1 00:05:49.662 --rc genhtml_legend=1 00:05:49.662 --rc geninfo_all_blocks=1 00:05:49.662 --rc geninfo_unexecuted_blocks=1 00:05:49.662 00:05:49.662 ' 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.662 --rc genhtml_branch_coverage=1 00:05:49.662 --rc genhtml_function_coverage=1 00:05:49.662 --rc genhtml_legend=1 00:05:49.662 --rc geninfo_all_blocks=1 00:05:49.662 --rc geninfo_unexecuted_blocks=1 00:05:49.662 00:05:49.662 ' 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.662 --rc genhtml_branch_coverage=1 00:05:49.662 --rc genhtml_function_coverage=1 00:05:49.662 --rc genhtml_legend=1 00:05:49.662 --rc geninfo_all_blocks=1 00:05:49.662 --rc geninfo_unexecuted_blocks=1 00:05:49.662 00:05:49.662 ' 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.662 --rc genhtml_branch_coverage=1 00:05:49.662 --rc genhtml_function_coverage=1 00:05:49.662 --rc genhtml_legend=1 00:05:49.662 --rc geninfo_all_blocks=1 00:05:49.662 --rc geninfo_unexecuted_blocks=1 00:05:49.662 00:05:49.662 ' 00:05:49.662 05:53:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.662 05:53:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.662 05:53:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.662 05:53:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 ************************************ 00:05:49.663 START TEST skip_rpc 00:05:49.663 ************************************ 00:05:49.663 05:53:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:49.663 05:53:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70555 00:05:49.663 05:53:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.663 05:53:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.663 05:53:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:49.663 [2024-12-08 05:53:12.647620] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:49.663 [2024-12-08 05:53:12.647811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70555 ] 00:05:49.922 [2024-12-08 05:53:12.794349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.922 [2024-12-08 05:53:12.827224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70555 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70555 ']' 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70555 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70555 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.193 killing process with pid 70555 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70555' 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70555 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70555 00:05:55.193 00:05:55.193 real 0m5.352s 00:05:55.193 user 0m4.976s 00:05:55.193 sys 0m0.281s 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.193 05:53:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.193 ************************************ 00:05:55.193 END TEST skip_rpc 00:05:55.193 
************************************ 00:05:55.193 05:53:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:55.193 05:53:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.193 05:53:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.193 05:53:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.193 ************************************ 00:05:55.193 START TEST skip_rpc_with_json 00:05:55.193 ************************************ 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70638 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70638 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70638 ']' 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.193 05:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.193 [2024-12-08 05:53:18.030053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:55.193 [2024-12-08 05:53:18.030241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70638 ] 00:05:55.193 [2024-12-08 05:53:18.171812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.193 [2024-12-08 05:53:18.209085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.453 [2024-12-08 05:53:18.378819] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.453 request: 00:05:55.453 { 00:05:55.453 "trtype": "tcp", 00:05:55.453 "method": "nvmf_get_transports", 00:05:55.453 "req_id": 1 00:05:55.453 } 00:05:55.453 Got JSON-RPC error response 00:05:55.453 response: 00:05:55.453 { 00:05:55.453 "code": -19, 00:05:55.453 "message": "No such device" 00:05:55.453 } 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.453 [2024-12-08 05:53:18.391047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.453 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.713 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.713 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.713 { 00:05:55.713 "subsystems": [ 00:05:55.713 { 00:05:55.713 "subsystem": "fsdev", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": "fsdev_set_opts", 00:05:55.713 "params": { 00:05:55.713 "fsdev_io_pool_size": 65535, 00:05:55.713 "fsdev_io_cache_size": 256 00:05:55.713 } 00:05:55.713 } 00:05:55.713 ] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "keyring", 00:05:55.713 "config": [] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "iobuf", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": "iobuf_set_options", 00:05:55.713 "params": { 00:05:55.713 "small_pool_count": 8192, 00:05:55.713 "large_pool_count": 1024, 00:05:55.713 "small_bufsize": 8192, 00:05:55.713 "large_bufsize": 135168 00:05:55.713 } 00:05:55.713 } 00:05:55.713 ] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "sock", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": 
"sock_set_default_impl", 00:05:55.713 "params": { 00:05:55.713 "impl_name": "posix" 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "sock_impl_set_options", 00:05:55.713 "params": { 00:05:55.713 "impl_name": "ssl", 00:05:55.713 "recv_buf_size": 4096, 00:05:55.713 "send_buf_size": 4096, 00:05:55.713 "enable_recv_pipe": true, 00:05:55.713 "enable_quickack": false, 00:05:55.713 "enable_placement_id": 0, 00:05:55.713 "enable_zerocopy_send_server": true, 00:05:55.713 "enable_zerocopy_send_client": false, 00:05:55.713 "zerocopy_threshold": 0, 00:05:55.713 "tls_version": 0, 00:05:55.713 "enable_ktls": false 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "sock_impl_set_options", 00:05:55.713 "params": { 00:05:55.713 "impl_name": "posix", 00:05:55.713 "recv_buf_size": 2097152, 00:05:55.713 "send_buf_size": 2097152, 00:05:55.713 "enable_recv_pipe": true, 00:05:55.713 "enable_quickack": false, 00:05:55.713 "enable_placement_id": 0, 00:05:55.713 "enable_zerocopy_send_server": true, 00:05:55.713 "enable_zerocopy_send_client": false, 00:05:55.713 "zerocopy_threshold": 0, 00:05:55.713 "tls_version": 0, 00:05:55.713 "enable_ktls": false 00:05:55.713 } 00:05:55.713 } 00:05:55.713 ] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "vmd", 00:05:55.713 "config": [] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "accel", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": "accel_set_options", 00:05:55.713 "params": { 00:05:55.713 "small_cache_size": 128, 00:05:55.713 "large_cache_size": 16, 00:05:55.713 "task_count": 2048, 00:05:55.713 "sequence_count": 2048, 00:05:55.713 "buf_count": 2048 00:05:55.713 } 00:05:55.713 } 00:05:55.713 ] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "bdev", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": "bdev_set_options", 00:05:55.713 "params": { 00:05:55.713 "bdev_io_pool_size": 65535, 00:05:55.713 "bdev_io_cache_size": 256, 00:05:55.713 "bdev_auto_examine": true, 00:05:55.713 "iobuf_small_cache_size": 128, 00:05:55.713 "iobuf_large_cache_size": 16 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "bdev_raid_set_options", 00:05:55.713 "params": { 00:05:55.713 "process_window_size_kb": 1024, 00:05:55.713 "process_max_bandwidth_mb_sec": 0 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "bdev_iscsi_set_options", 00:05:55.713 "params": { 00:05:55.713 "timeout_sec": 30 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "bdev_nvme_set_options", 00:05:55.713 "params": { 00:05:55.713 "action_on_timeout": "none", 00:05:55.713 "timeout_us": 0, 00:05:55.713 "timeout_admin_us": 0, 00:05:55.713 "keep_alive_timeout_ms": 10000, 00:05:55.713 "arbitration_burst": 0, 00:05:55.713 "low_priority_weight": 0, 00:05:55.713 "medium_priority_weight": 0, 00:05:55.713 "high_priority_weight": 0, 00:05:55.713 "nvme_adminq_poll_period_us": 10000, 00:05:55.713 "nvme_ioq_poll_period_us": 0, 00:05:55.713 "io_queue_requests": 0, 00:05:55.713 "delay_cmd_submit": true, 00:05:55.713 "transport_retry_count": 4, 00:05:55.713 "bdev_retry_count": 3, 00:05:55.713 "transport_ack_timeout": 0, 00:05:55.713 "ctrlr_loss_timeout_sec": 0, 00:05:55.713 "reconnect_delay_sec": 0, 00:05:55.713 "fast_io_fail_timeout_sec": 0, 00:05:55.713 "disable_auto_failback": false, 00:05:55.713 "generate_uuids": false, 00:05:55.713 "transport_tos": 0, 00:05:55.713 "nvme_error_stat": false, 00:05:55.713 "rdma_srq_size": 0, 00:05:55.713 "io_path_stat": false, 00:05:55.713 
"allow_accel_sequence": false, 00:05:55.713 "rdma_max_cq_size": 0, 00:05:55.713 "rdma_cm_event_timeout_ms": 0, 00:05:55.713 "dhchap_digests": [ 00:05:55.713 "sha256", 00:05:55.713 "sha384", 00:05:55.713 "sha512" 00:05:55.713 ], 00:05:55.713 "dhchap_dhgroups": [ 00:05:55.713 "null", 00:05:55.713 "ffdhe2048", 00:05:55.713 "ffdhe3072", 00:05:55.713 "ffdhe4096", 00:05:55.713 "ffdhe6144", 00:05:55.713 "ffdhe8192" 00:05:55.713 ] 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "bdev_nvme_set_hotplug", 00:05:55.713 "params": { 00:05:55.713 "period_us": 100000, 00:05:55.713 "enable": false 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "bdev_wait_for_examine" 00:05:55.713 } 00:05:55.713 ] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "scsi", 00:05:55.713 "config": null 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "scheduler", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": "framework_set_scheduler", 00:05:55.713 "params": { 00:05:55.713 "name": "static" 00:05:55.713 } 00:05:55.713 } 00:05:55.713 ] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "vhost_scsi", 00:05:55.713 "config": [] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "vhost_blk", 00:05:55.713 "config": [] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "ublk", 00:05:55.713 "config": [] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "nbd", 00:05:55.713 "config": [] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "nvmf", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": "nvmf_set_config", 00:05:55.713 "params": { 00:05:55.713 "discovery_filter": "match_any", 00:05:55.713 "admin_cmd_passthru": { 00:05:55.713 "identify_ctrlr": false 00:05:55.713 }, 00:05:55.713 "dhchap_digests": [ 00:05:55.713 "sha256", 00:05:55.713 "sha384", 00:05:55.713 "sha512" 00:05:55.713 ], 00:05:55.713 "dhchap_dhgroups": [ 00:05:55.713 "null", 00:05:55.713 "ffdhe2048", 00:05:55.713 "ffdhe3072", 00:05:55.713 "ffdhe4096", 00:05:55.713 "ffdhe6144", 00:05:55.713 "ffdhe8192" 00:05:55.713 ] 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "nvmf_set_max_subsystems", 00:05:55.713 "params": { 00:05:55.713 "max_subsystems": 1024 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "nvmf_set_crdt", 00:05:55.713 "params": { 00:05:55.713 "crdt1": 0, 00:05:55.713 "crdt2": 0, 00:05:55.713 "crdt3": 0 00:05:55.713 } 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "method": "nvmf_create_transport", 00:05:55.713 "params": { 00:05:55.713 "trtype": "TCP", 00:05:55.713 "max_queue_depth": 128, 00:05:55.713 "max_io_qpairs_per_ctrlr": 127, 00:05:55.713 "in_capsule_data_size": 4096, 00:05:55.713 "max_io_size": 131072, 00:05:55.713 "io_unit_size": 131072, 00:05:55.713 "max_aq_depth": 128, 00:05:55.713 "num_shared_buffers": 511, 00:05:55.713 "buf_cache_size": 4294967295, 00:05:55.713 "dif_insert_or_strip": false, 00:05:55.713 "zcopy": false, 00:05:55.713 "c2h_success": true, 00:05:55.713 "sock_priority": 0, 00:05:55.713 "abort_timeout_sec": 1, 00:05:55.713 "ack_timeout": 0, 00:05:55.713 "data_wr_pool_size": 0 00:05:55.713 } 00:05:55.713 } 00:05:55.713 ] 00:05:55.713 }, 00:05:55.713 { 00:05:55.713 "subsystem": "iscsi", 00:05:55.713 "config": [ 00:05:55.713 { 00:05:55.713 "method": "iscsi_set_options", 00:05:55.713 "params": { 00:05:55.713 "node_base": "iqn.2016-06.io.spdk", 00:05:55.713 "max_sessions": 128, 00:05:55.713 "max_connections_per_session": 2, 00:05:55.713 "max_queue_depth": 64, 00:05:55.713 "default_time2wait": 2, 
00:05:55.713 "default_time2retain": 20, 00:05:55.713 "first_burst_length": 8192, 00:05:55.713 "immediate_data": true, 00:05:55.713 "allow_duplicated_isid": false, 00:05:55.713 "error_recovery_level": 0, 00:05:55.713 "nop_timeout": 60, 00:05:55.713 "nop_in_interval": 30, 00:05:55.713 "disable_chap": false, 00:05:55.713 "require_chap": false, 00:05:55.713 "mutual_chap": false, 00:05:55.713 "chap_group": 0, 00:05:55.714 "max_large_datain_per_connection": 64, 00:05:55.714 "max_r2t_per_connection": 4, 00:05:55.714 "pdu_pool_size": 36864, 00:05:55.714 "immediate_data_pool_size": 16384, 00:05:55.714 "data_out_pool_size": 2048 00:05:55.714 } 00:05:55.714 } 00:05:55.714 ] 00:05:55.714 } 00:05:55.714 ] 00:05:55.714 } 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70638 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70638 ']' 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70638 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70638 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.714 killing process with pid 70638 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70638' 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70638 00:05:55.714 05:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70638 00:05:55.972 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70659 00:05:55.972 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.972 05:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70659 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70659 ']' 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70659 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70659 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70659' 00:06:01.244 killing process with pid 70659 00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70659 
00:06:01.244 05:53:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70659 00:06:01.244 05:53:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.244 05:53:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.244 00:06:01.244 real 0m6.336s 00:06:01.244 user 0m5.921s 00:06:01.244 sys 0m0.577s 00:06:01.244 05:53:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.244 05:53:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.244 ************************************ 00:06:01.244 END TEST skip_rpc_with_json 00:06:01.244 ************************************ 00:06:01.504 05:53:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.504 05:53:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.504 05:53:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.504 05:53:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.504 ************************************ 00:06:01.504 START TEST skip_rpc_with_delay 00:06:01.504 ************************************ 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.504 [2024-12-08 05:53:24.442193] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:01.504 [2024-12-08 05:53:24.442419] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.504 00:06:01.504 real 0m0.187s 00:06:01.504 user 0m0.103s 00:06:01.504 sys 0m0.082s 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.504 ************************************ 00:06:01.504 END TEST skip_rpc_with_delay 00:06:01.504 ************************************ 00:06:01.504 05:53:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.763 05:53:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:01.763 05:53:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:01.763 05:53:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:01.763 05:53:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.763 05:53:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.763 05:53:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.763 ************************************ 00:06:01.763 START TEST exit_on_failed_rpc_init 00:06:01.763 ************************************ 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70771 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70771 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70771 ']' 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.763 05:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.763 [2024-12-08 05:53:24.666641] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:01.763 [2024-12-08 05:53:24.666940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70771 ] 00:06:02.023 [2024-12-08 05:53:24.812637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.023 [2024-12-08 05:53:24.847952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.023 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.283 [2024-12-08 05:53:25.144767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:02.283 [2024-12-08 05:53:25.145000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70781 ] 00:06:02.283 [2024-12-08 05:53:25.291669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.542 [2024-12-08 05:53:25.335763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.542 [2024-12-08 05:53:25.335928] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:02.542 [2024-12-08 05:53:25.335962] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.542 [2024-12-08 05:53:25.335981] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70771 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70771 ']' 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70771 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70771 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.543 killing process with pid 70771 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70771' 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70771 00:06:02.543 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70771 00:06:02.801 00:06:02.801 real 0m1.242s 00:06:02.801 user 0m1.364s 00:06:02.801 sys 0m0.446s 00:06:02.801 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.801 05:53:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.801 ************************************ 00:06:02.801 END TEST exit_on_failed_rpc_init 00:06:02.801 ************************************ 00:06:03.061 05:53:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.061 00:06:03.061 real 0m13.520s 00:06:03.061 user 0m12.550s 00:06:03.061 sys 0m1.592s 00:06:03.061 05:53:25 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.061 05:53:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.061 ************************************ 00:06:03.061 END TEST skip_rpc 00:06:03.061 ************************************ 00:06:03.061 05:53:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.061 05:53:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.061 05:53:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.061 05:53:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.061 
************************************ 00:06:03.061 START TEST rpc_client 00:06:03.061 ************************************ 00:06:03.061 05:53:25 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.061 * Looking for test storage... 00:06:03.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:03.061 05:53:25 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.061 05:53:25 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.061 05:53:25 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.061 05:53:26 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.061 05:53:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:03.061 05:53:26 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.061 05:53:26 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.061 --rc genhtml_branch_coverage=1 00:06:03.061 --rc genhtml_function_coverage=1 00:06:03.061 --rc genhtml_legend=1 00:06:03.061 --rc geninfo_all_blocks=1 00:06:03.061 --rc geninfo_unexecuted_blocks=1 00:06:03.061 00:06:03.061 ' 00:06:03.061 05:53:26 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.061 --rc genhtml_branch_coverage=1 00:06:03.061 --rc genhtml_function_coverage=1 00:06:03.061 --rc genhtml_legend=1 00:06:03.061 --rc geninfo_all_blocks=1 00:06:03.061 --rc geninfo_unexecuted_blocks=1 00:06:03.061 00:06:03.061 ' 00:06:03.061 05:53:26 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.061 --rc genhtml_branch_coverage=1 00:06:03.061 --rc genhtml_function_coverage=1 00:06:03.061 --rc genhtml_legend=1 00:06:03.061 --rc geninfo_all_blocks=1 00:06:03.061 --rc geninfo_unexecuted_blocks=1 00:06:03.061 00:06:03.061 ' 00:06:03.061 05:53:26 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.061 --rc genhtml_branch_coverage=1 00:06:03.061 --rc genhtml_function_coverage=1 00:06:03.061 --rc genhtml_legend=1 00:06:03.061 --rc geninfo_all_blocks=1 00:06:03.061 --rc geninfo_unexecuted_blocks=1 00:06:03.061 00:06:03.061 ' 00:06:03.061 05:53:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:03.321 OK 00:06:03.321 05:53:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.321 00:06:03.321 real 0m0.235s 00:06:03.321 user 0m0.147s 00:06:03.321 sys 0m0.098s 00:06:03.321 05:53:26 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.321 05:53:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 ************************************ 00:06:03.321 END TEST rpc_client 00:06:03.321 ************************************ 00:06:03.321 05:53:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.321 05:53:26 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.321 05:53:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.321 05:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:03.321 ************************************ 00:06:03.321 START TEST json_config 00:06:03.321 ************************************ 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.321 05:53:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.321 05:53:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.321 05:53:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.321 05:53:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.321 05:53:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.321 05:53:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.321 05:53:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.321 05:53:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:03.321 05:53:26 json_config -- scripts/common.sh@345 -- # : 1 00:06:03.321 05:53:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.321 05:53:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.321 05:53:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:03.321 05:53:26 json_config -- scripts/common.sh@353 -- # local d=1 00:06:03.321 05:53:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.321 05:53:26 json_config -- scripts/common.sh@355 -- # echo 1 00:06:03.321 05:53:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.321 05:53:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@353 -- # local d=2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.321 05:53:26 json_config -- scripts/common.sh@355 -- # echo 2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.321 05:53:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.321 05:53:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.321 05:53:26 json_config -- scripts/common.sh@368 -- # return 0 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.321 --rc genhtml_branch_coverage=1 00:06:03.321 --rc genhtml_function_coverage=1 00:06:03.321 --rc genhtml_legend=1 00:06:03.321 --rc geninfo_all_blocks=1 00:06:03.321 --rc geninfo_unexecuted_blocks=1 00:06:03.321 00:06:03.321 ' 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.321 --rc genhtml_branch_coverage=1 00:06:03.321 --rc genhtml_function_coverage=1 00:06:03.321 --rc genhtml_legend=1 00:06:03.321 --rc geninfo_all_blocks=1 00:06:03.321 --rc geninfo_unexecuted_blocks=1 00:06:03.321 00:06:03.321 ' 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.321 --rc genhtml_branch_coverage=1 00:06:03.321 --rc genhtml_function_coverage=1 00:06:03.321 --rc genhtml_legend=1 00:06:03.321 --rc geninfo_all_blocks=1 00:06:03.321 --rc geninfo_unexecuted_blocks=1 00:06:03.321 00:06:03.321 ' 00:06:03.321 05:53:26 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.321 --rc genhtml_branch_coverage=1 00:06:03.321 --rc genhtml_function_coverage=1 00:06:03.321 --rc genhtml_legend=1 00:06:03.321 --rc geninfo_all_blocks=1 00:06:03.321 --rc geninfo_unexecuted_blocks=1 00:06:03.321 00:06:03.321 ' 00:06:03.321 05:53:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.321 05:53:26 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00b433b8-09e1-47cf-b4d0-42bc79698308 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00b433b8-09e1-47cf-b4d0-42bc79698308 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.321 05:53:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.321 05:53:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.321 05:53:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.322 05:53:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.322 05:53:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.322 05:53:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.322 05:53:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.322 05:53:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.322 05:53:26 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.322 05:53:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@51 -- # : 0 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.322 05:53:26 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.322 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.322 05:53:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.580 05:53:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.580 05:53:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.580 05:53:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.580 05:53:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.580 05:53:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.580 WARNING: No tests are enabled so not running JSON configuration tests 00:06:03.580 05:53:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:03.580 05:53:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:03.580 00:06:03.580 real 0m0.175s 00:06:03.580 user 0m0.121s 00:06:03.580 sys 0m0.058s 00:06:03.580 05:53:26 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.580 05:53:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 ************************************ 00:06:03.580 END TEST json_config 00:06:03.580 ************************************ 00:06:03.580 05:53:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.580 05:53:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.580 05:53:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.580 05:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:03.580 ************************************ 00:06:03.580 START TEST json_config_extra_key 00:06:03.580 ************************************ 00:06:03.580 05:53:26 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.580 05:53:26 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.580 05:53:26 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.580 05:53:26 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.580 05:53:26 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.580 05:53:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.580 05:53:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.580 05:53:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.580 05:53:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.580 05:53:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.580 05:53:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.580 05:53:26 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.580 05:53:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.581 --rc genhtml_branch_coverage=1 00:06:03.581 --rc genhtml_function_coverage=1 00:06:03.581 --rc genhtml_legend=1 00:06:03.581 --rc geninfo_all_blocks=1 00:06:03.581 --rc geninfo_unexecuted_blocks=1 00:06:03.581 00:06:03.581 ' 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.581 --rc genhtml_branch_coverage=1 00:06:03.581 --rc genhtml_function_coverage=1 00:06:03.581 --rc genhtml_legend=1 00:06:03.581 --rc geninfo_all_blocks=1 00:06:03.581 --rc geninfo_unexecuted_blocks=1 00:06:03.581 00:06:03.581 ' 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.581 --rc genhtml_branch_coverage=1 00:06:03.581 --rc genhtml_function_coverage=1 00:06:03.581 --rc genhtml_legend=1 00:06:03.581 --rc geninfo_all_blocks=1 00:06:03.581 --rc geninfo_unexecuted_blocks=1 00:06:03.581 00:06:03.581 ' 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.581 --rc genhtml_branch_coverage=1 00:06:03.581 --rc 
genhtml_function_coverage=1 00:06:03.581 --rc genhtml_legend=1 00:06:03.581 --rc geninfo_all_blocks=1 00:06:03.581 --rc geninfo_unexecuted_blocks=1 00:06:03.581 00:06:03.581 ' 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00b433b8-09e1-47cf-b4d0-42bc79698308 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00b433b8-09e1-47cf-b4d0-42bc79698308 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.581 05:53:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.581 05:53:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.581 05:53:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.581 05:53:26 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.581 05:53:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:03.581 05:53:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.581 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.581 05:53:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:03.581 INFO: launching applications... 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
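Note on the "[: : integer expression expected" message recorded twice above: nvmf/common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') on a value that is empty here, and the [ builtin cannot parse '' as an integer. A minimal reproduction, with a hypothetical variable name standing in for whichever flag was empty:

    flag=""                               # hypothetical stand-in for the empty value
    [ "$flag" -eq 1 ]                     # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo skip   # defaulting to 0 keeps the test well-formed
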
00:06:03.581 05:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70964 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.581 Waiting for target to run... 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70964 /var/tmp/spdk_tgt.sock 00:06:03.581 05:53:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70964 ']' 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.581 05:53:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.839 [2024-12-08 05:53:26.718315] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:03.839 [2024-12-08 05:53:26.718523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70964 ] 00:06:04.097 [2024-12-08 05:53:27.030140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.097 [2024-12-08 05:53:27.059530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.664 00:06:04.664 INFO: shutting down applications... 00:06:04.664 05:53:27 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.664 05:53:27 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:04.664 05:53:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
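The launch traced above boils down to starting spdk_tgt with the extra_key.json config and polling its RPC socket until it answers. A simplified sketch using the paths from the log; the until-loop is a stand-in for the real waitforlisten helper, which retries up to the max_retries=100 logged above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # poll until the target's RPC server answers on the UNIX socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
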
00:06:04.664 05:53:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70964 ]] 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70964 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70964 00:06:04.664 05:53:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.232 05:53:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.232 05:53:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.232 05:53:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70964 00:06:05.232 SPDK target shutdown done 00:06:05.232 Success 00:06:05.232 05:53:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:05.232 05:53:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:05.232 05:53:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:05.232 05:53:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:05.232 05:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:05.232 00:06:05.232 real 0m1.711s 00:06:05.232 user 0m1.513s 00:06:05.232 sys 0m0.403s 00:06:05.232 ************************************ 00:06:05.232 END TEST json_config_extra_key 00:06:05.232 ************************************ 00:06:05.232 05:53:28 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.232 05:53:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 05:53:28 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:05.232 05:53:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.232 05:53:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.232 05:53:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 ************************************ 00:06:05.232 START TEST alias_rpc 00:06:05.232 ************************************ 00:06:05.232 05:53:28 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:05.232 * Looking for test storage... 
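Closing out the json_config_extra_key run above: the shutdown traced before "SPDK target shutdown done" is a SIGINT followed by a kill -0 poll, condensed here from the json_config/common.sh trace (numbers as logged: up to 30 half-second waits):

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 fails once the target has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'
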
00:06:05.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:05.232 05:53:28 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.232 05:53:28 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.232 05:53:28 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.492 05:53:28 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.492 05:53:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:05.492 05:53:28 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.493 --rc genhtml_branch_coverage=1 00:06:05.493 --rc genhtml_function_coverage=1 00:06:05.493 --rc genhtml_legend=1 00:06:05.493 --rc geninfo_all_blocks=1 00:06:05.493 --rc geninfo_unexecuted_blocks=1 00:06:05.493 00:06:05.493 ' 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.493 --rc genhtml_branch_coverage=1 00:06:05.493 --rc genhtml_function_coverage=1 00:06:05.493 --rc genhtml_legend=1 00:06:05.493 --rc geninfo_all_blocks=1 00:06:05.493 --rc geninfo_unexecuted_blocks=1 00:06:05.493 00:06:05.493 ' 00:06:05.493 05:53:28 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.493 --rc genhtml_branch_coverage=1 00:06:05.493 --rc genhtml_function_coverage=1 00:06:05.493 --rc genhtml_legend=1 00:06:05.493 --rc geninfo_all_blocks=1 00:06:05.493 --rc geninfo_unexecuted_blocks=1 00:06:05.493 00:06:05.493 ' 00:06:05.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.493 --rc genhtml_branch_coverage=1 00:06:05.493 --rc genhtml_function_coverage=1 00:06:05.493 --rc genhtml_legend=1 00:06:05.493 --rc geninfo_all_blocks=1 00:06:05.493 --rc geninfo_unexecuted_blocks=1 00:06:05.493 00:06:05.493 ' 00:06:05.493 05:53:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.493 05:53:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71043 00:06:05.493 05:53:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71043 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 71043 ']' 00:06:05.493 05:53:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.493 05:53:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.493 [2024-12-08 05:53:28.483917] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:05.493 [2024-12-08 05:53:28.484109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71043 ] 00:06:05.753 [2024-12-08 05:53:28.626226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.753 [2024-12-08 05:53:28.658361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.012 05:53:28 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.012 05:53:28 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:06.012 05:53:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:06.270 05:53:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71043 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 71043 ']' 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 71043 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71043 00:06:06.270 killing process with pid 71043 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71043' 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@969 -- # kill 71043 00:06:06.270 05:53:29 alias_rpc -- common/autotest_common.sh@974 -- # wait 71043 00:06:06.528 ************************************ 00:06:06.528 END TEST alias_rpc 00:06:06.528 ************************************ 00:06:06.528 00:06:06.528 real 0m1.282s 00:06:06.528 user 0m1.410s 00:06:06.528 sys 0m0.387s 00:06:06.528 05:53:29 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.528 05:53:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.528 05:53:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:06.528 05:53:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.528 05:53:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.528 05:53:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.528 05:53:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.528 ************************************ 00:06:06.528 START TEST spdkcli_tcp 00:06:06.528 ************************************ 00:06:06.529 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.787 * Looking for test storage... 
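The killprocess call traced above follows a fixed shape: confirm the pid is set and alive, look up its command name (reactor_0 here) so a sudo wrapper is never killed directly, then TERM and reap it. A condensed sketch; the real common helper does more in the sudo branch:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                 # fails if the process is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1     # the real helper special-cases sudo here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
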
00:06:06.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:06.787 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.787 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.787 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.787 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.787 05:53:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.788 05:53:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.788 05:53:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.788 05:53:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.788 05:53:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.788 05:53:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.788 05:53:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.788 05:53:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.788 --rc genhtml_branch_coverage=1 00:06:06.788 --rc genhtml_function_coverage=1 00:06:06.788 --rc genhtml_legend=1 00:06:06.788 --rc geninfo_all_blocks=1 00:06:06.788 --rc geninfo_unexecuted_blocks=1 00:06:06.788 00:06:06.788 ' 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.788 --rc genhtml_branch_coverage=1 00:06:06.788 --rc genhtml_function_coverage=1 00:06:06.788 --rc genhtml_legend=1 00:06:06.788 --rc geninfo_all_blocks=1 00:06:06.788 --rc geninfo_unexecuted_blocks=1 00:06:06.788 
00:06:06.788 ' 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.788 --rc genhtml_branch_coverage=1 00:06:06.788 --rc genhtml_function_coverage=1 00:06:06.788 --rc genhtml_legend=1 00:06:06.788 --rc geninfo_all_blocks=1 00:06:06.788 --rc geninfo_unexecuted_blocks=1 00:06:06.788 00:06:06.788 ' 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.788 --rc genhtml_branch_coverage=1 00:06:06.788 --rc genhtml_function_coverage=1 00:06:06.788 --rc genhtml_legend=1 00:06:06.788 --rc geninfo_all_blocks=1 00:06:06.788 --rc geninfo_unexecuted_blocks=1 00:06:06.788 00:06:06.788 ' 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71115 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71115 00:06:06.788 05:53:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 71115 ']' 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.788 05:53:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.788 [2024-12-08 05:53:29.801515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
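For reference, the -m 0x3 coremask passed to spdk_tgt above is just bits 0 and 1 set, which is why it reappears as -c 0x3 in the EAL parameter line below and why two reactors start:

    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # -> 0x3, i.e. cores 0 and 1
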
00:06:06.788 [2024-12-08 05:53:29.801684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71115 ] 00:06:07.046 [2024-12-08 05:53:29.945908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.046 [2024-12-08 05:53:29.981623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.046 [2024-12-08 05:53:29.981685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.303 05:53:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.303 05:53:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:07.303 05:53:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71123 00:06:07.303 05:53:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:07.303 05:53:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:07.562 [ 00:06:07.562 "bdev_malloc_delete", 00:06:07.562 "bdev_malloc_create", 00:06:07.562 "bdev_null_resize", 00:06:07.562 "bdev_null_delete", 00:06:07.562 "bdev_null_create", 00:06:07.562 "bdev_nvme_cuse_unregister", 00:06:07.562 "bdev_nvme_cuse_register", 00:06:07.562 "bdev_opal_new_user", 00:06:07.562 "bdev_opal_set_lock_state", 00:06:07.562 "bdev_opal_delete", 00:06:07.562 "bdev_opal_get_info", 00:06:07.562 "bdev_opal_create", 00:06:07.562 "bdev_nvme_opal_revert", 00:06:07.562 "bdev_nvme_opal_init", 00:06:07.562 "bdev_nvme_send_cmd", 00:06:07.562 "bdev_nvme_set_keys", 00:06:07.562 "bdev_nvme_get_path_iostat", 00:06:07.562 "bdev_nvme_get_mdns_discovery_info", 00:06:07.562 "bdev_nvme_stop_mdns_discovery", 00:06:07.562 "bdev_nvme_start_mdns_discovery", 00:06:07.562 "bdev_nvme_set_multipath_policy", 00:06:07.562 "bdev_nvme_set_preferred_path", 00:06:07.562 "bdev_nvme_get_io_paths", 00:06:07.562 "bdev_nvme_remove_error_injection", 00:06:07.562 "bdev_nvme_add_error_injection", 00:06:07.562 "bdev_nvme_get_discovery_info", 00:06:07.562 "bdev_nvme_stop_discovery", 00:06:07.562 "bdev_nvme_start_discovery", 00:06:07.562 "bdev_nvme_get_controller_health_info", 00:06:07.562 "bdev_nvme_disable_controller", 00:06:07.562 "bdev_nvme_enable_controller", 00:06:07.562 "bdev_nvme_reset_controller", 00:06:07.562 "bdev_nvme_get_transport_statistics", 00:06:07.562 "bdev_nvme_apply_firmware", 00:06:07.562 "bdev_nvme_detach_controller", 00:06:07.562 "bdev_nvme_get_controllers", 00:06:07.562 "bdev_nvme_attach_controller", 00:06:07.562 "bdev_nvme_set_hotplug", 00:06:07.562 "bdev_nvme_set_options", 00:06:07.562 "bdev_passthru_delete", 00:06:07.562 "bdev_passthru_create", 00:06:07.562 "bdev_lvol_set_parent_bdev", 00:06:07.562 "bdev_lvol_set_parent", 00:06:07.562 "bdev_lvol_check_shallow_copy", 00:06:07.562 "bdev_lvol_start_shallow_copy", 00:06:07.562 "bdev_lvol_grow_lvstore", 00:06:07.562 "bdev_lvol_get_lvols", 00:06:07.562 "bdev_lvol_get_lvstores", 00:06:07.562 "bdev_lvol_delete", 00:06:07.562 "bdev_lvol_set_read_only", 00:06:07.562 "bdev_lvol_resize", 00:06:07.562 "bdev_lvol_decouple_parent", 00:06:07.562 "bdev_lvol_inflate", 00:06:07.562 "bdev_lvol_rename", 00:06:07.563 "bdev_lvol_clone_bdev", 00:06:07.563 "bdev_lvol_clone", 00:06:07.563 "bdev_lvol_snapshot", 00:06:07.563 "bdev_lvol_create", 00:06:07.563 "bdev_lvol_delete_lvstore", 00:06:07.563 "bdev_lvol_rename_lvstore", 00:06:07.563 
"bdev_lvol_create_lvstore", 00:06:07.563 "bdev_raid_set_options", 00:06:07.563 "bdev_raid_remove_base_bdev", 00:06:07.563 "bdev_raid_add_base_bdev", 00:06:07.563 "bdev_raid_delete", 00:06:07.563 "bdev_raid_create", 00:06:07.563 "bdev_raid_get_bdevs", 00:06:07.563 "bdev_error_inject_error", 00:06:07.563 "bdev_error_delete", 00:06:07.563 "bdev_error_create", 00:06:07.563 "bdev_split_delete", 00:06:07.563 "bdev_split_create", 00:06:07.563 "bdev_delay_delete", 00:06:07.563 "bdev_delay_create", 00:06:07.563 "bdev_delay_update_latency", 00:06:07.563 "bdev_zone_block_delete", 00:06:07.563 "bdev_zone_block_create", 00:06:07.563 "blobfs_create", 00:06:07.563 "blobfs_detect", 00:06:07.563 "blobfs_set_cache_size", 00:06:07.563 "bdev_xnvme_delete", 00:06:07.563 "bdev_xnvme_create", 00:06:07.563 "bdev_aio_delete", 00:06:07.563 "bdev_aio_rescan", 00:06:07.563 "bdev_aio_create", 00:06:07.563 "bdev_ftl_set_property", 00:06:07.563 "bdev_ftl_get_properties", 00:06:07.563 "bdev_ftl_get_stats", 00:06:07.563 "bdev_ftl_unmap", 00:06:07.563 "bdev_ftl_unload", 00:06:07.563 "bdev_ftl_delete", 00:06:07.563 "bdev_ftl_load", 00:06:07.563 "bdev_ftl_create", 00:06:07.563 "bdev_virtio_attach_controller", 00:06:07.563 "bdev_virtio_scsi_get_devices", 00:06:07.563 "bdev_virtio_detach_controller", 00:06:07.563 "bdev_virtio_blk_set_hotplug", 00:06:07.563 "bdev_iscsi_delete", 00:06:07.563 "bdev_iscsi_create", 00:06:07.563 "bdev_iscsi_set_options", 00:06:07.563 "accel_error_inject_error", 00:06:07.563 "ioat_scan_accel_module", 00:06:07.563 "dsa_scan_accel_module", 00:06:07.563 "iaa_scan_accel_module", 00:06:07.563 "keyring_file_remove_key", 00:06:07.563 "keyring_file_add_key", 00:06:07.563 "keyring_linux_set_options", 00:06:07.563 "fsdev_aio_delete", 00:06:07.563 "fsdev_aio_create", 00:06:07.563 "iscsi_get_histogram", 00:06:07.563 "iscsi_enable_histogram", 00:06:07.563 "iscsi_set_options", 00:06:07.563 "iscsi_get_auth_groups", 00:06:07.563 "iscsi_auth_group_remove_secret", 00:06:07.563 "iscsi_auth_group_add_secret", 00:06:07.563 "iscsi_delete_auth_group", 00:06:07.563 "iscsi_create_auth_group", 00:06:07.563 "iscsi_set_discovery_auth", 00:06:07.563 "iscsi_get_options", 00:06:07.563 "iscsi_target_node_request_logout", 00:06:07.563 "iscsi_target_node_set_redirect", 00:06:07.563 "iscsi_target_node_set_auth", 00:06:07.563 "iscsi_target_node_add_lun", 00:06:07.563 "iscsi_get_stats", 00:06:07.563 "iscsi_get_connections", 00:06:07.563 "iscsi_portal_group_set_auth", 00:06:07.563 "iscsi_start_portal_group", 00:06:07.563 "iscsi_delete_portal_group", 00:06:07.563 "iscsi_create_portal_group", 00:06:07.563 "iscsi_get_portal_groups", 00:06:07.563 "iscsi_delete_target_node", 00:06:07.563 "iscsi_target_node_remove_pg_ig_maps", 00:06:07.563 "iscsi_target_node_add_pg_ig_maps", 00:06:07.563 "iscsi_create_target_node", 00:06:07.563 "iscsi_get_target_nodes", 00:06:07.563 "iscsi_delete_initiator_group", 00:06:07.563 "iscsi_initiator_group_remove_initiators", 00:06:07.563 "iscsi_initiator_group_add_initiators", 00:06:07.563 "iscsi_create_initiator_group", 00:06:07.563 "iscsi_get_initiator_groups", 00:06:07.563 "nvmf_set_crdt", 00:06:07.563 "nvmf_set_config", 00:06:07.563 "nvmf_set_max_subsystems", 00:06:07.563 "nvmf_stop_mdns_prr", 00:06:07.563 "nvmf_publish_mdns_prr", 00:06:07.563 "nvmf_subsystem_get_listeners", 00:06:07.563 "nvmf_subsystem_get_qpairs", 00:06:07.563 "nvmf_subsystem_get_controllers", 00:06:07.563 "nvmf_get_stats", 00:06:07.563 "nvmf_get_transports", 00:06:07.563 "nvmf_create_transport", 00:06:07.563 "nvmf_get_targets", 00:06:07.563 
"nvmf_delete_target", 00:06:07.563 "nvmf_create_target", 00:06:07.563 "nvmf_subsystem_allow_any_host", 00:06:07.563 "nvmf_subsystem_set_keys", 00:06:07.563 "nvmf_subsystem_remove_host", 00:06:07.563 "nvmf_subsystem_add_host", 00:06:07.563 "nvmf_ns_remove_host", 00:06:07.563 "nvmf_ns_add_host", 00:06:07.563 "nvmf_subsystem_remove_ns", 00:06:07.563 "nvmf_subsystem_set_ns_ana_group", 00:06:07.563 "nvmf_subsystem_add_ns", 00:06:07.563 "nvmf_subsystem_listener_set_ana_state", 00:06:07.563 "nvmf_discovery_get_referrals", 00:06:07.563 "nvmf_discovery_remove_referral", 00:06:07.563 "nvmf_discovery_add_referral", 00:06:07.563 "nvmf_subsystem_remove_listener", 00:06:07.563 "nvmf_subsystem_add_listener", 00:06:07.563 "nvmf_delete_subsystem", 00:06:07.563 "nvmf_create_subsystem", 00:06:07.563 "nvmf_get_subsystems", 00:06:07.563 "env_dpdk_get_mem_stats", 00:06:07.563 "nbd_get_disks", 00:06:07.563 "nbd_stop_disk", 00:06:07.563 "nbd_start_disk", 00:06:07.563 "ublk_recover_disk", 00:06:07.563 "ublk_get_disks", 00:06:07.563 "ublk_stop_disk", 00:06:07.563 "ublk_start_disk", 00:06:07.563 "ublk_destroy_target", 00:06:07.563 "ublk_create_target", 00:06:07.563 "virtio_blk_create_transport", 00:06:07.563 "virtio_blk_get_transports", 00:06:07.563 "vhost_controller_set_coalescing", 00:06:07.563 "vhost_get_controllers", 00:06:07.563 "vhost_delete_controller", 00:06:07.563 "vhost_create_blk_controller", 00:06:07.563 "vhost_scsi_controller_remove_target", 00:06:07.563 "vhost_scsi_controller_add_target", 00:06:07.563 "vhost_start_scsi_controller", 00:06:07.563 "vhost_create_scsi_controller", 00:06:07.563 "thread_set_cpumask", 00:06:07.563 "scheduler_set_options", 00:06:07.563 "framework_get_governor", 00:06:07.563 "framework_get_scheduler", 00:06:07.563 "framework_set_scheduler", 00:06:07.563 "framework_get_reactors", 00:06:07.563 "thread_get_io_channels", 00:06:07.563 "thread_get_pollers", 00:06:07.563 "thread_get_stats", 00:06:07.563 "framework_monitor_context_switch", 00:06:07.563 "spdk_kill_instance", 00:06:07.563 "log_enable_timestamps", 00:06:07.563 "log_get_flags", 00:06:07.563 "log_clear_flag", 00:06:07.563 "log_set_flag", 00:06:07.563 "log_get_level", 00:06:07.563 "log_set_level", 00:06:07.563 "log_get_print_level", 00:06:07.563 "log_set_print_level", 00:06:07.563 "framework_enable_cpumask_locks", 00:06:07.563 "framework_disable_cpumask_locks", 00:06:07.563 "framework_wait_init", 00:06:07.563 "framework_start_init", 00:06:07.563 "scsi_get_devices", 00:06:07.563 "bdev_get_histogram", 00:06:07.563 "bdev_enable_histogram", 00:06:07.563 "bdev_set_qos_limit", 00:06:07.563 "bdev_set_qd_sampling_period", 00:06:07.563 "bdev_get_bdevs", 00:06:07.563 "bdev_reset_iostat", 00:06:07.563 "bdev_get_iostat", 00:06:07.563 "bdev_examine", 00:06:07.563 "bdev_wait_for_examine", 00:06:07.563 "bdev_set_options", 00:06:07.563 "accel_get_stats", 00:06:07.563 "accel_set_options", 00:06:07.563 "accel_set_driver", 00:06:07.563 "accel_crypto_key_destroy", 00:06:07.563 "accel_crypto_keys_get", 00:06:07.563 "accel_crypto_key_create", 00:06:07.563 "accel_assign_opc", 00:06:07.563 "accel_get_module_info", 00:06:07.563 "accel_get_opc_assignments", 00:06:07.563 "vmd_rescan", 00:06:07.563 "vmd_remove_device", 00:06:07.563 "vmd_enable", 00:06:07.563 "sock_get_default_impl", 00:06:07.563 "sock_set_default_impl", 00:06:07.563 "sock_impl_set_options", 00:06:07.563 "sock_impl_get_options", 00:06:07.563 "iobuf_get_stats", 00:06:07.563 "iobuf_set_options", 00:06:07.563 "keyring_get_keys", 00:06:07.563 "framework_get_pci_devices", 00:06:07.563 
"framework_get_config", 00:06:07.563 "framework_get_subsystems", 00:06:07.563 "fsdev_set_opts", 00:06:07.563 "fsdev_get_opts", 00:06:07.563 "trace_get_info", 00:06:07.563 "trace_get_tpoint_group_mask", 00:06:07.563 "trace_disable_tpoint_group", 00:06:07.563 "trace_enable_tpoint_group", 00:06:07.563 "trace_clear_tpoint_mask", 00:06:07.563 "trace_set_tpoint_mask", 00:06:07.563 "notify_get_notifications", 00:06:07.563 "notify_get_types", 00:06:07.563 "spdk_get_version", 00:06:07.563 "rpc_get_methods" 00:06:07.563 ] 00:06:07.563 05:53:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.563 05:53:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:07.563 05:53:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71115 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 71115 ']' 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 71115 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71115 00:06:07.563 killing process with pid 71115 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71115' 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 71115 00:06:07.563 05:53:30 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 71115 00:06:07.822 ************************************ 00:06:07.822 END TEST spdkcli_tcp 00:06:07.822 ************************************ 00:06:07.822 00:06:07.822 real 0m1.302s 00:06:07.822 user 0m2.149s 00:06:07.822 sys 0m0.423s 00:06:07.822 05:53:30 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.822 05:53:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.822 05:53:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.822 05:53:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.822 05:53:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.822 05:53:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.822 ************************************ 00:06:07.822 START TEST dpdk_mem_utility 00:06:07.822 ************************************ 00:06:07.822 05:53:30 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.081 * Looking for test storage... 
00:06:08.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.081 05:53:30 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.081 05:53:30 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.081 05:53:30 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.081 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.081 05:53:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.082 05:53:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.082 --rc genhtml_branch_coverage=1 00:06:08.082 --rc genhtml_function_coverage=1 00:06:08.082 --rc genhtml_legend=1 00:06:08.082 --rc geninfo_all_blocks=1 00:06:08.082 --rc geninfo_unexecuted_blocks=1 00:06:08.082 00:06:08.082 ' 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.082 --rc 
genhtml_branch_coverage=1 00:06:08.082 --rc genhtml_function_coverage=1 00:06:08.082 --rc genhtml_legend=1 00:06:08.082 --rc geninfo_all_blocks=1 00:06:08.082 --rc geninfo_unexecuted_blocks=1 00:06:08.082 00:06:08.082 ' 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.082 --rc genhtml_branch_coverage=1 00:06:08.082 --rc genhtml_function_coverage=1 00:06:08.082 --rc genhtml_legend=1 00:06:08.082 --rc geninfo_all_blocks=1 00:06:08.082 --rc geninfo_unexecuted_blocks=1 00:06:08.082 00:06:08.082 ' 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.082 --rc genhtml_branch_coverage=1 00:06:08.082 --rc genhtml_function_coverage=1 00:06:08.082 --rc genhtml_legend=1 00:06:08.082 --rc geninfo_all_blocks=1 00:06:08.082 --rc geninfo_unexecuted_blocks=1 00:06:08.082 00:06:08.082 ' 00:06:08.082 05:53:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.082 05:53:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71202 00:06:08.082 05:53:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.082 05:53:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71202 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71202 ']' 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.082 05:53:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.340 [2024-12-08 05:53:31.166971] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:08.340 [2024-12-08 05:53:31.167793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71202 ] 00:06:08.340 [2024-12-08 05:53:31.318084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.340 [2024-12-08 05:53:31.351388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.275 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.275 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:09.275 05:53:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.275 05:53:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.275 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.275 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.275 { 00:06:09.275 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.275 } 00:06:09.275 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.275 05:53:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.275 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:09.275 1 heaps totaling size 860.000000 MiB 00:06:09.275 size: 860.000000 MiB heap id: 0 00:06:09.275 end heaps---------- 00:06:09.275 9 mempools totaling size 642.649841 MiB 00:06:09.275 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.275 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.275 size: 92.545471 MiB name: bdev_io_71202 00:06:09.275 size: 51.011292 MiB name: evtpool_71202 00:06:09.275 size: 50.003479 MiB name: msgpool_71202 00:06:09.275 size: 36.509338 MiB name: fsdev_io_71202 00:06:09.275 size: 21.763794 MiB name: PDU_Pool 00:06:09.275 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.275 size: 0.026123 MiB name: Session_Pool 00:06:09.275 end mempools------- 00:06:09.275 6 memzones totaling size 4.142822 MiB 00:06:09.275 size: 1.000366 MiB name: RG_ring_0_71202 00:06:09.275 size: 1.000366 MiB name: RG_ring_1_71202 00:06:09.275 size: 1.000366 MiB name: RG_ring_4_71202 00:06:09.275 size: 1.000366 MiB name: RG_ring_5_71202 00:06:09.275 size: 0.125366 MiB name: RG_ring_2_71202 00:06:09.275 size: 0.015991 MiB name: RG_ring_3_71202 00:06:09.275 end memzones------- 00:06:09.275 05:53:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.275 heap id: 0 total size: 860.000000 MiB number of busy elements: 309 number of free elements: 16 00:06:09.275 list of free elements. 
size: 13.936157 MiB 00:06:09.275 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:09.275 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:09.275 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:09.275 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:09.275 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:09.275 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:09.275 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:09.275 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:09.275 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:09.275 element at address: 0x20001d800000 with size: 0.567688 MiB 00:06:09.275 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:09.275 element at address: 0x200003e00000 with size: 0.488281 MiB 00:06:09.275 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:09.275 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:09.275 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:09.275 element at address: 0x200003a00000 with size: 0.352844 MiB 00:06:09.275 list of standard malloc elements. size: 199.267151 MiB 00:06:09.275 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:09.275 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:09.275 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:09.275 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:09.275 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:09.275 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:09.275 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:09.275 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:09.275 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:09.275 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:09.275 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:09.275 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:09.275 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d700 
with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:09.275 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:09.275 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8931c0 with size: 0.000183 MiB 
00:06:09.276 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:09.276 element at 
address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e880 
with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:09.276 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:09.276 list of memzone associated elements. 
size: 646.796692 MiB 00:06:09.276 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:09.276 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.276 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:09.276 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.276 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:09.276 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71202_0 00:06:09.276 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:09.276 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71202_0 00:06:09.276 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:09.276 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71202_0 00:06:09.276 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:09.276 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71202_0 00:06:09.276 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:09.276 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.276 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:09.276 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.276 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:09.276 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71202 00:06:09.276 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:09.276 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71202 00:06:09.276 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:09.276 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71202 00:06:09.276 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:09.276 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.276 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:09.276 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.276 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:09.276 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.276 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:09.276 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.276 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:09.276 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71202 00:06:09.276 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:09.276 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71202 00:06:09.276 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:09.276 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71202 00:06:09.276 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:09.276 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71202 00:06:09.276 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:09.276 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71202 00:06:09.276 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:09.276 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71202 00:06:09.276 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:09.276 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.276 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:09.276 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.276 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:09.276 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.276 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:06:09.276 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71202 00:06:09.276 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:09.276 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.276 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:09.276 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.276 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:06:09.276 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71202 00:06:09.276 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:09.276 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.276 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:09.276 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71202 00:06:09.276 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:09.276 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71202 00:06:09.276 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:06:09.276 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71202 00:06:09.276 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:09.276 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.276 05:53:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.276 05:53:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71202 00:06:09.276 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71202 ']' 00:06:09.276 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71202 00:06:09.276 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:09.276 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.276 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71202 00:06:09.534 killing process with pid 71202 00:06:09.534 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.534 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.534 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71202' 00:06:09.534 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71202 00:06:09.534 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71202 00:06:09.791 00:06:09.791 real 0m1.757s 00:06:09.791 user 0m1.930s 00:06:09.791 sys 0m0.450s 00:06:09.791 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.791 05:53:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.791 ************************************ 00:06:09.791 END TEST dpdk_mem_utility 00:06:09.791 ************************************ 00:06:09.791 05:53:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.791 05:53:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.791 05:53:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.791 05:53:32 -- common/autotest_common.sh@10 -- # set +x 
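The dpdk_mem_utility test that just completed exercises a three-step flow that can be repeated by hand against a live target; a minimal sketch, assuming spdk_tgt is still listening on the default socket:

    # ask the target to dump its DPDK memory state
    # (the RPC returns the dump path, /tmp/spdk_mem_dump.txt, as shown above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

    # summarize heaps, mempools and memzones from that dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # print the per-element breakdown for memory component 0, as in the listing above
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0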
00:06:09.791 ************************************ 00:06:09.791 START TEST event 00:06:09.791 ************************************ 00:06:09.791 05:53:32 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.791 * Looking for test storage... 00:06:09.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.791 05:53:32 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.791 05:53:32 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.791 05:53:32 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.050 05:53:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.050 05:53:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.050 05:53:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.050 05:53:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.050 05:53:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.050 05:53:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.050 05:53:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.050 05:53:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.050 05:53:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.050 05:53:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.050 05:53:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.050 05:53:32 event -- scripts/common.sh@344 -- # case "$op" in 00:06:10.050 05:53:32 event -- scripts/common.sh@345 -- # : 1 00:06:10.050 05:53:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.050 05:53:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.050 05:53:32 event -- scripts/common.sh@365 -- # decimal 1 00:06:10.050 05:53:32 event -- scripts/common.sh@353 -- # local d=1 00:06:10.050 05:53:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.050 05:53:32 event -- scripts/common.sh@355 -- # echo 1 00:06:10.050 05:53:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.050 05:53:32 event -- scripts/common.sh@366 -- # decimal 2 00:06:10.050 05:53:32 event -- scripts/common.sh@353 -- # local d=2 00:06:10.050 05:53:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.050 05:53:32 event -- scripts/common.sh@355 -- # echo 2 00:06:10.050 05:53:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.050 05:53:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.050 05:53:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.050 05:53:32 event -- scripts/common.sh@368 -- # return 0 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.050 --rc genhtml_branch_coverage=1 00:06:10.050 --rc genhtml_function_coverage=1 00:06:10.050 --rc genhtml_legend=1 00:06:10.050 --rc geninfo_all_blocks=1 00:06:10.050 --rc geninfo_unexecuted_blocks=1 00:06:10.050 00:06:10.050 ' 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.050 --rc genhtml_branch_coverage=1 00:06:10.050 --rc genhtml_function_coverage=1 00:06:10.050 --rc genhtml_legend=1 00:06:10.050 --rc 
geninfo_all_blocks=1 00:06:10.050 --rc geninfo_unexecuted_blocks=1 00:06:10.050 00:06:10.050 ' 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.050 --rc genhtml_branch_coverage=1 00:06:10.050 --rc genhtml_function_coverage=1 00:06:10.050 --rc genhtml_legend=1 00:06:10.050 --rc geninfo_all_blocks=1 00:06:10.050 --rc geninfo_unexecuted_blocks=1 00:06:10.050 00:06:10.050 ' 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.050 --rc genhtml_branch_coverage=1 00:06:10.050 --rc genhtml_function_coverage=1 00:06:10.050 --rc genhtml_legend=1 00:06:10.050 --rc geninfo_all_blocks=1 00:06:10.050 --rc geninfo_unexecuted_blocks=1 00:06:10.050 00:06:10.050 ' 00:06:10.050 05:53:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.050 05:53:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.050 05:53:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:10.050 05:53:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.050 05:53:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.050 ************************************ 00:06:10.050 START TEST event_perf 00:06:10.050 ************************************ 00:06:10.050 05:53:32 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.050 Running I/O for 1 seconds...[2024-12-08 05:53:32.907062] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:10.050 [2024-12-08 05:53:32.907415] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:06:10.050 [2024-12-08 05:53:33.050148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.050 [2024-12-08 05:53:33.083927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.050 [2024-12-08 05:53:33.084091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.050 [2024-12-08 05:53:33.084238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.050 Running I/O for 1 seconds...[2024-12-08 05:53:33.084127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.423 00:06:11.423 lcore 0: 196245 00:06:11.423 lcore 1: 196244 00:06:11.423 lcore 2: 196245 00:06:11.423 lcore 3: 196245 00:06:11.423 done. 
00:06:11.423 00:06:11.423 real 0m1.287s 00:06:11.423 user 0m4.075s 00:06:11.423 sys 0m0.091s 00:06:11.423 05:53:34 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.423 05:53:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.423 ************************************ 00:06:11.423 END TEST event_perf 00:06:11.423 ************************************ 00:06:11.423 05:53:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.423 05:53:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:11.423 05:53:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.423 05:53:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.423 ************************************ 00:06:11.423 START TEST event_reactor 00:06:11.423 ************************************ 00:06:11.423 05:53:34 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.423 [2024-12-08 05:53:34.237549] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:11.423 [2024-12-08 05:53:34.237849] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71322 ] 00:06:11.423 [2024-12-08 05:53:34.376746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.423 [2024-12-08 05:53:34.408139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.800 test_start 00:06:12.800 oneshot 00:06:12.800 tick 100 00:06:12.800 tick 100 00:06:12.800 tick 250 00:06:12.800 tick 100 00:06:12.800 tick 100 00:06:12.800 tick 100 00:06:12.800 tick 250 00:06:12.800 tick 500 00:06:12.800 tick 100 00:06:12.800 tick 100 00:06:12.800 tick 250 00:06:12.800 tick 100 00:06:12.800 tick 100 00:06:12.800 test_end 00:06:12.800 00:06:12.800 real 0m1.262s 00:06:12.800 user 0m1.095s 00:06:12.800 sys 0m0.059s 00:06:12.800 05:53:35 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.800 ************************************ 00:06:12.800 END TEST event_reactor 00:06:12.800 05:53:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.800 ************************************ 00:06:12.800 05:53:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.800 05:53:35 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:12.800 05:53:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.800 05:53:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.800 ************************************ 00:06:12.800 START TEST event_reactor_perf 00:06:12.800 ************************************ 00:06:12.800 05:53:35 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.800 [2024-12-08 05:53:35.557588] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:12.800 [2024-12-08 05:53:35.557785] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71364 ] 00:06:12.800 [2024-12-08 05:53:35.702682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.800 [2024-12-08 05:53:35.738177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.229 test_start 00:06:14.229 test_end 00:06:14.229 Performance: 303107 events per second 00:06:14.229 00:06:14.229 real 0m1.293s 00:06:14.229 user 0m1.111s 00:06:14.229 sys 0m0.075s 00:06:14.229 ************************************ 00:06:14.229 END TEST event_reactor_perf 00:06:14.229 ************************************ 00:06:14.229 05:53:36 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.229 05:53:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.229 05:53:36 event -- event/event.sh@49 -- # uname -s 00:06:14.229 05:53:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.229 05:53:36 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.229 05:53:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.229 05:53:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.229 05:53:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.229 ************************************ 00:06:14.229 START TEST event_scheduler 00:06:14.229 ************************************ 00:06:14.229 05:53:36 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.229 * Looking for test storage... 
00:06:14.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:14.229 05:53:36 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.229 05:53:36 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.229 05:53:36 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.229 05:53:37 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.229 05:53:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.230 05:53:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.230 --rc genhtml_branch_coverage=1 00:06:14.230 --rc genhtml_function_coverage=1 00:06:14.230 --rc genhtml_legend=1 00:06:14.230 --rc geninfo_all_blocks=1 00:06:14.230 --rc geninfo_unexecuted_blocks=1 00:06:14.230 00:06:14.230 ' 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.230 --rc genhtml_branch_coverage=1 00:06:14.230 --rc genhtml_function_coverage=1 00:06:14.230 --rc genhtml_legend=1 00:06:14.230 --rc geninfo_all_blocks=1 00:06:14.230 --rc geninfo_unexecuted_blocks=1 00:06:14.230 00:06:14.230 ' 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.230 --rc genhtml_branch_coverage=1 00:06:14.230 --rc genhtml_function_coverage=1 00:06:14.230 --rc genhtml_legend=1 00:06:14.230 --rc geninfo_all_blocks=1 00:06:14.230 --rc geninfo_unexecuted_blocks=1 00:06:14.230 00:06:14.230 ' 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.230 --rc genhtml_branch_coverage=1 00:06:14.230 --rc genhtml_function_coverage=1 00:06:14.230 --rc genhtml_legend=1 00:06:14.230 --rc geninfo_all_blocks=1 00:06:14.230 --rc geninfo_unexecuted_blocks=1 00:06:14.230 00:06:14.230 ' 00:06:14.230 05:53:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.230 05:53:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71429 00:06:14.230 05:53:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.230 05:53:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71429 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71429 ']' 00:06:14.230 05:53:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.230 05:53:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.230 [2024-12-08 05:53:37.152055] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:14.230 [2024-12-08 05:53:37.152812] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71429 ] 00:06:14.489 [2024-12-08 05:53:37.303847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.489 [2024-12-08 05:53:37.354251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.489 [2024-12-08 05:53:37.354541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.489 [2024-12-08 05:53:37.354591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.489 [2024-12-08 05:53:37.354423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:15.427 05:53:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.427 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.427 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.427 POWER: Cannot set governor of lcore 0 to performance 00:06:15.427 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.427 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.427 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:15.427 POWER: Unable to set Power Management Environment for lcore 0 00:06:15.427 [2024-12-08 05:53:38.181148] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:15.427 [2024-12-08 05:53:38.181222] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:15.427 [2024-12-08 05:53:38.181248] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.427 [2024-12-08 05:53:38.181285] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.427 [2024-12-08 05:53:38.181312] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.427 [2024-12-08 05:53:38.181326]
scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 [2024-12-08 05:53:38.236422] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 ************************************ 00:06:15.427 START TEST scheduler_create_thread 00:06:15.427 ************************************ 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 2 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 3 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 4 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 5 00:06:15.427 05:53:38 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 6 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 7 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 8 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 9 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 10 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.427 05:53:38 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.427 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.996 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.996 05:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.996 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.996 05:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.369 05:53:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.369 05:53:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:17.369 05:53:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:17.369 05:53:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.369 05:53:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.304 ************************************ 00:06:18.304 END TEST scheduler_create_thread 00:06:18.304 ************************************ 00:06:18.304 05:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.304 00:06:18.304 real 0m3.091s 00:06:18.304 user 0m0.019s 00:06:18.304 sys 0m0.006s 00:06:18.304 05:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.304 05:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.563 05:53:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:18.563 05:53:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71429 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71429 ']' 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71429 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71429 00:06:18.563 killing process with pid 71429 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71429' 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71429 00:06:18.563 05:53:41 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71429 00:06:18.822 [2024-12-08 05:53:41.720770] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
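The killprocess step traced above follows a standard shutdown pattern: confirm the PID is alive with kill -0, check the process identity, send SIGTERM, then wait for the process to exit. A minimal standalone sketch of that pattern (an illustration only, not the autotest_common.sh implementation, which also compares the ps comm name before killing):

# Sketch of a killprocess-style helper; the function name is hypothetical.
killprocess_sketch() {
    local pid=$1
    # kill -0 delivers no signal; it only tests that the PID exists.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid not running"
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"                      # default SIGTERM, as in the trace
    wait "$pid" 2>/dev/null || true  # reap it when it is a child of this shell
}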
00:06:19.080 00:06:19.080 real 0m5.082s 00:06:19.080 user 0m10.011s 00:06:19.080 sys 0m0.424s 00:06:19.080 05:53:41 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.080 ************************************ 00:06:19.080 END TEST event_scheduler 00:06:19.080 ************************************ 00:06:19.080 05:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.080 05:53:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:19.080 05:53:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:19.080 05:53:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.080 05:53:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.080 05:53:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.080 ************************************ 00:06:19.080 START TEST app_repeat 00:06:19.080 ************************************ 00:06:19.080 05:53:42 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:19.080 Process app_repeat pid: 71535 00:06:19.080 spdk_app_start Round 0 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71535 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71535' 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:19.080 05:53:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71535 /var/tmp/spdk-nbd.sock 00:06:19.080 05:53:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71535 ']' 00:06:19.080 05:53:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.080 05:53:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.080 05:53:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.080 05:53:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.080 05:53:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.080 [2024-12-08 05:53:42.064218] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
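The waitforlisten call above blocks until the freshly launched app_repeat process answers on its UNIX domain socket (/var/tmp/spdk-nbd.sock), retrying up to max_retries=100 times. A rough sketch of that polling loop, under the assumption that checking for the socket file is an adequate readiness probe (the real helper goes further and issues an actual RPC over the socket):

# Sketch of a waitforlisten-style loop; the function name is hypothetical.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do            # mirrors max_retries=100
        kill -0 "$pid" 2>/dev/null || return 1 # target died before listening
        [[ -S $sock ]] && return 0             # socket present: listener is up
        sleep 0.1
    done
    return 1
}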
00:06:19.080 [2024-12-08 05:53:42.064664] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71535 ] 00:06:19.339 [2024-12-08 05:53:42.215488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.339 [2024-12-08 05:53:42.260739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.339 [2024-12-08 05:53:42.260788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.275 05:53:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.275 05:53:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.275 05:53:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.533 Malloc0 00:06:20.533 05:53:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.793 Malloc1 00:06:20.793 05:53:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.793 05:53:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.052 /dev/nbd0 00:06:21.052 05:53:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.052 05:53:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:21.052 05:53:43 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.052 1+0 records in 00:06:21.052 1+0 records out 00:06:21.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368025 s, 11.1 MB/s 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.052 05:53:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.052 05:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.052 05:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.052 05:53:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.312 /dev/nbd1 00:06:21.312 05:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.312 05:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.312 1+0 records in 00:06:21.312 1+0 records out 00:06:21.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388336 s, 10.5 MB/s 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.312 05:53:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.312 05:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.312 05:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.312 05:53:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.312 05:53:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
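The waitfornbd fragments in the trace show the readiness check for each nbd device: poll /proc/partitions until the device name registers, then perform a single direct-I/O read of one 4 KiB block to prove the export actually serves data. A condensed sketch of that sequence (the test-file path is a placeholder):

# Sketch of the waitfornbd pattern seen above; paths are illustrative.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct 4 KiB read confirms the device is usable.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]     # the trace checks '[ 4096 != 0 ]'
    rm -f /tmp/nbdtest
}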
00:06:21.312 05:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.571 05:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.571 { 00:06:21.571 "nbd_device": "/dev/nbd0", 00:06:21.571 "bdev_name": "Malloc0" 00:06:21.571 }, 00:06:21.571 { 00:06:21.571 "nbd_device": "/dev/nbd1", 00:06:21.571 "bdev_name": "Malloc1" 00:06:21.571 } 00:06:21.571 ]' 00:06:21.571 05:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.571 { 00:06:21.571 "nbd_device": "/dev/nbd0", 00:06:21.571 "bdev_name": "Malloc0" 00:06:21.571 }, 00:06:21.571 { 00:06:21.571 "nbd_device": "/dev/nbd1", 00:06:21.571 "bdev_name": "Malloc1" 00:06:21.571 } 00:06:21.571 ]' 00:06:21.571 05:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.830 /dev/nbd1' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.830 /dev/nbd1' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.830 256+0 records in 00:06:21.830 256+0 records out 00:06:21.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106492 s, 98.5 MB/s 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.830 256+0 records in 00:06:21.830 256+0 records out 00:06:21.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245479 s, 42.7 MB/s 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.830 256+0 records in 00:06:21.830 256+0 records out 00:06:21.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272576 s, 38.5 MB/s 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.830 05:53:44 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.830 05:53:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.089 05:53:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.347 05:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.347 05:53:45 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.348 05:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.606 05:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.606 05:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.606 05:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.864 05:53:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.864 05:53:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.123 05:53:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.124 [2024-12-08 05:53:46.127561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.124 [2024-12-08 05:53:46.159865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.124 [2024-12-08 05:53:46.159875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.382 [2024-12-08 05:53:46.193163] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.382 [2024-12-08 05:53:46.193298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.674 spdk_app_start Round 1 00:06:26.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.674 05:53:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.674 05:53:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:26.674 05:53:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71535 /var/tmp/spdk-nbd.sock 00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71535 ']' 00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
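Round 0 above exercises the full data path: 1 MiB of random data is written through each nbd device with dd (oflag=direct) and compared back against the source file with cmp -b -n 1M, after which the disks are stopped and the instance is killed with spdk_kill_instance SIGTERM. The write-then-verify core of that round, as a self-contained sketch (device list and temp path are assumptions):

# Sketch of the nbd write/verify step from the trace above.
nbd_verify_sketch() {
    local tmp=/tmp/nbdrandtest nbd
    dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                      # byte-for-byte compare
    done
    rm -f "$tmp"
}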
00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.674 05:53:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:26.674 05:53:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.674 Malloc0 00:06:26.674 05:53:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.933 Malloc1 00:06:26.933 05:53:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.933 05:53:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.193 /dev/nbd0 00:06:27.193 05:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.193 05:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.193 1+0 records in 00:06:27.193 1+0 records out 
00:06:27.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689763 s, 5.9 MB/s 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.193 05:53:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.193 05:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.193 05:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.193 05:53:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.452 /dev/nbd1 00:06:27.452 05:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.452 05:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.452 1+0 records in 00:06:27.452 1+0 records out 00:06:27.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272668 s, 15.0 MB/s 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.452 05:53:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.452 05:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.452 05:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.452 05:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.452 05:53:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.452 05:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.712 { 00:06:27.712 "nbd_device": "/dev/nbd0", 00:06:27.712 "bdev_name": "Malloc0" 00:06:27.712 }, 00:06:27.712 { 00:06:27.712 "nbd_device": "/dev/nbd1", 00:06:27.712 "bdev_name": "Malloc1" 00:06:27.712 } 
00:06:27.712 ]' 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.712 { 00:06:27.712 "nbd_device": "/dev/nbd0", 00:06:27.712 "bdev_name": "Malloc0" 00:06:27.712 }, 00:06:27.712 { 00:06:27.712 "nbd_device": "/dev/nbd1", 00:06:27.712 "bdev_name": "Malloc1" 00:06:27.712 } 00:06:27.712 ]' 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.712 /dev/nbd1' 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.712 /dev/nbd1' 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.712 256+0 records in 00:06:27.712 256+0 records out 00:06:27.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010565 s, 99.3 MB/s 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.712 05:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.712 256+0 records in 00:06:27.712 256+0 records out 00:06:27.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232467 s, 45.1 MB/s 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.972 256+0 records in 00:06:27.972 256+0 records out 00:06:27.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260503 s, 40.3 MB/s 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.972 05:53:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.973 05:53:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.973 05:53:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.232 05:53:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.491 05:53:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.750 05:53:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.750 05:53:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.328 05:53:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.328 [2024-12-08 05:53:52.173221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.328 [2024-12-08 05:53:52.204285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.328 [2024-12-08 05:53:52.204288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.328 [2024-12-08 05:53:52.232893] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.328 [2024-12-08 05:53:52.232974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.660 spdk_app_start Round 2 00:06:32.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.660 05:53:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.660 05:53:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:32.660 05:53:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71535 /var/tmp/spdk-nbd.sock 00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71535 ']' 00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
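Between rounds, nbd_get_count verifies that no devices remain exported: nbd_get_disks returns a JSON array, jq extracts the nbd_device paths, and grep -c counts them; because grep -c exits non-zero on zero matches, the trace shows a trailing true guarding the pipeline. A sketch of that count, using the rpc.py path and socket already seen in this log (the function name is hypothetical):

# Sketch of the nbd_get_count step.
nbd_get_count_sketch() {
    local sock=/var/tmp/spdk-nbd.sock names
    names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device')
    # grep -c still prints 0 on no matches; '|| true' absorbs its exit status.
    echo "$names" | grep -c /dev/nbd || true
}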
00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.660 05:53:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:32.660 05:53:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.660 Malloc0 00:06:32.660 05:53:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.918 Malloc1 00:06:32.918 05:53:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.918 05:53:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.176 /dev/nbd0 00:06:33.176 05:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.176 05:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.176 1+0 records in 00:06:33.176 1+0 records out 
00:06:33.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321567 s, 12.7 MB/s 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.176 05:53:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.176 05:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.176 05:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.176 05:53:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.435 /dev/nbd1 00:06:33.435 05:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.435 05:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.435 1+0 records in 00:06:33.435 1+0 records out 00:06:33.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344629 s, 11.9 MB/s 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.435 05:53:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.435 05:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.435 05:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.435 05:53:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.435 05:53:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.435 05:53:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.694 05:53:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.694 { 00:06:33.694 "nbd_device": "/dev/nbd0", 00:06:33.694 "bdev_name": "Malloc0" 00:06:33.694 }, 00:06:33.694 { 00:06:33.694 "nbd_device": "/dev/nbd1", 00:06:33.694 "bdev_name": "Malloc1" 00:06:33.694 } 
00:06:33.694 ]' 00:06:33.694 05:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.694 { 00:06:33.694 "nbd_device": "/dev/nbd0", 00:06:33.694 "bdev_name": "Malloc0" 00:06:33.694 }, 00:06:33.694 { 00:06:33.694 "nbd_device": "/dev/nbd1", 00:06:33.694 "bdev_name": "Malloc1" 00:06:33.694 } 00:06:33.694 ]' 00:06:33.694 05:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.953 /dev/nbd1' 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.953 /dev/nbd1' 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.953 256+0 records in 00:06:33.953 256+0 records out 00:06:33.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107198 s, 97.8 MB/s 00:06:33.953 05:53:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.954 256+0 records in 00:06:33.954 256+0 records out 00:06:33.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203509 s, 51.5 MB/s 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.954 256+0 records in 00:06:33.954 256+0 records out 00:06:33.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249294 s, 42.1 MB/s 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.954 05:53:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.954 05:53:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.213 05:53:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.471 05:53:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.471 05:53:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.471 05:53:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.471 05:53:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.471 05:53:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.472 05:53:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.472 05:53:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.472 05:53:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.472 05:53:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.472 05:53:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.472 05:53:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.731 05:53:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.731 05:53:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.990 05:53:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.248 [2024-12-08 05:53:58.084205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.248 [2024-12-08 05:53:58.115060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.248 [2024-12-08 05:53:58.115067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.248 [2024-12-08 05:53:58.143945] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.248 [2024-12-08 05:53:58.144023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.536 05:54:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71535 /var/tmp/spdk-nbd.sock 00:06:38.536 05:54:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71535 ']' 00:06:38.536 05:54:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.536 05:54:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.536 05:54:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
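The round that just completed exercises the NBD write/verify path end to end: export each Malloc bdev as a kernel block device over the RPC socket, push random data through both devices, compare, then tear everything down and confirm nbd_get_disks reports an empty list. A minimal standalone sketch of that flow, assuming a running spdk_tgt serving /var/tmp/spdk-nbd.sock with bdevs Malloc0 and Malloc1 already created; the RPC method names, dd/cmp invocations, and jq filter are taken from the trace, while the TMP variable stands in for the nbdrandtest scratch file:

#!/usr/bin/env bash
# Sketch of the write/verify round traced above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=$(mktemp)   # stand-in for .../test/event/nbdrandtest

$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$TMP" bs=4096 count=256           # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
  dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct  # write it out
  cmp -b -n 1M "$TMP" "$dev"                             # read back and verify
done
rm "$TMP"

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'           # expect no output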
00:06:38.536 05:54:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.536 05:54:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:38.536 05:54:01 event.app_repeat -- event/event.sh@39 -- # killprocess 71535 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71535 ']' 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71535 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71535 00:06:38.536 killing process with pid 71535 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71535' 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71535 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71535 00:06:38.536 spdk_app_start is called in Round 0. 00:06:38.536 Shutdown signal received, stop current app iteration 00:06:38.536 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:38.536 spdk_app_start is called in Round 1. 00:06:38.536 Shutdown signal received, stop current app iteration 00:06:38.536 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:38.536 spdk_app_start is called in Round 2. 00:06:38.536 Shutdown signal received, stop current app iteration 00:06:38.536 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:38.536 spdk_app_start is called in Round 3. 00:06:38.536 Shutdown signal received, stop current app iteration 00:06:38.536 05:54:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:38.536 05:54:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:38.536 00:06:38.536 real 0m19.479s 00:06:38.536 user 0m44.628s 00:06:38.536 sys 0m2.632s 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.536 ************************************ 00:06:38.536 END TEST app_repeat 00:06:38.536 ************************************ 00:06:38.536 05:54:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.536 05:54:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:38.536 05:54:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.536 05:54:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.536 05:54:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.536 05:54:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.536 ************************************ 00:06:38.536 START TEST cpu_locks 00:06:38.536 ************************************ 00:06:38.536 05:54:01 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.794 * Looking for test storage... 
00:06:38.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.794 05:54:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.794 --rc genhtml_branch_coverage=1 00:06:38.794 --rc genhtml_function_coverage=1 00:06:38.794 --rc genhtml_legend=1 00:06:38.794 --rc geninfo_all_blocks=1 00:06:38.794 --rc geninfo_unexecuted_blocks=1 00:06:38.794 00:06:38.794 ' 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.794 --rc genhtml_branch_coverage=1 00:06:38.794 --rc genhtml_function_coverage=1 
00:06:38.794 --rc genhtml_legend=1 00:06:38.794 --rc geninfo_all_blocks=1 00:06:38.794 --rc geninfo_unexecuted_blocks=1 00:06:38.794 00:06:38.794 ' 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.794 --rc genhtml_branch_coverage=1 00:06:38.794 --rc genhtml_function_coverage=1 00:06:38.794 --rc genhtml_legend=1 00:06:38.794 --rc geninfo_all_blocks=1 00:06:38.794 --rc geninfo_unexecuted_blocks=1 00:06:38.794 00:06:38.794 ' 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.794 --rc genhtml_branch_coverage=1 00:06:38.794 --rc genhtml_function_coverage=1 00:06:38.794 --rc genhtml_legend=1 00:06:38.794 --rc geninfo_all_blocks=1 00:06:38.794 --rc geninfo_unexecuted_blocks=1 00:06:38.794 00:06:38.794 ' 00:06:38.794 05:54:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:38.794 05:54:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:38.794 05:54:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:38.794 05:54:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.794 05:54:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.794 ************************************ 00:06:38.794 START TEST default_locks 00:06:38.794 ************************************ 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71987 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71987 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71987 ']' 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.795 05:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.053 [2024-12-08 05:54:01.864248] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:39.053 [2024-12-08 05:54:01.864481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71987 ] 00:06:39.053 [2024-12-08 05:54:02.013829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.053 [2024-12-08 05:54:02.047866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.992 05:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.992 05:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:39.992 05:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71987 00:06:39.992 05:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71987 00:06:39.992 05:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71987 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71987 ']' 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71987 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71987 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.252 killing process with pid 71987 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71987' 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71987 00:06:40.252 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71987 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71987 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71987 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71987 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71987 ']' 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.821 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.821 ERROR: process (pid: 71987) is no longer running 00:06:40.821 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71987) - No such process 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.821 00:06:40.821 real 0m1.876s 00:06:40.821 user 0m2.079s 00:06:40.821 sys 0m0.552s 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.821 05:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.821 ************************************ 00:06:40.821 END TEST default_locks 00:06:40.821 ************************************ 00:06:40.821 05:54:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.821 05:54:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.821 05:54:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.821 05:54:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.821 ************************************ 00:06:40.821 START TEST default_locks_via_rpc 00:06:40.821 ************************************ 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72040 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72040 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72040 ']' 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.821 05:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.821 [2024-12-08 05:54:03.791707] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:40.821 [2024-12-08 05:54:03.791909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72040 ] 00:06:41.080 [2024-12-08 05:54:03.940139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.080 [2024-12-08 05:54:03.978165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72040 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72040 00:06:42.015 05:54:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72040 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 72040 ']' 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 72040 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72040 00:06:42.273 killing process with pid 72040 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72040' 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 72040 00:06:42.273 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 72040 00:06:42.530 00:06:42.530 real 0m1.834s 00:06:42.530 user 0m2.065s 00:06:42.530 sys 0m0.495s 00:06:42.530 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.530 05:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.530 ************************************ 00:06:42.530 END TEST default_locks_via_rpc 00:06:42.530 ************************************ 00:06:42.530 05:54:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:42.530 05:54:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.530 05:54:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.530 05:54:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.530 ************************************ 00:06:42.530 START TEST non_locking_app_on_locked_coremask 00:06:42.530 ************************************ 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72093 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72093 /var/tmp/spdk.sock 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72093 ']' 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.530 05:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.788 [2024-12-08 05:54:05.661945] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
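Both default_locks variants above decide pass/fail with the same probe: once spdk_tgt has claimed its cores, lslocks on the target pid must list an spdk_cpu_lock entry. A minimal sketch of that probe, assuming a single spdk_tgt already listening on /var/tmp/spdk.sock; the lslocks/grep pipeline and the two framework RPCs are verbatim from the trace, while the pgrep lookup is an assumption for the sketch:

# Check whether an SPDK target holds its per-core lock files.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
pid=$(pgrep -f spdk_tgt)   # assumes exactly one target is running
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
  echo "core locks held by $pid"
fi
# The same state can be toggled at runtime, as default_locks_via_rpc does:
$RPC framework_disable_cpumask_locks   # release the core lock files
$RPC framework_enable_cpumask_locks    # re-acquire them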
00:06:42.788 [2024-12-08 05:54:05.662152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72093 ] 00:06:42.788 [2024-12-08 05:54:05.803229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.046 [2024-12-08 05:54:05.840238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72109 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72109 /var/tmp/spdk2.sock 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72109 ']' 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.613 05:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.871 [2024-12-08 05:54:06.721338] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:43.871 [2024-12-08 05:54:06.722007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72109 ] 00:06:43.871 [2024-12-08 05:54:06.877279] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.871 [2024-12-08 05:54:06.877401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.130 [2024-12-08 05:54:06.948783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.703 05:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.703 05:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:44.703 05:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72093 00:06:44.703 05:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72093 00:06:44.703 05:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72093 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72093 ']' 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72093 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72093 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.637 killing process with pid 72093 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72093' 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72093 00:06:45.637 05:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72093 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72109 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72109 ']' 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72109 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72109 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.212 killing process with pid 72109 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72109' 00:06:46.212 05:54:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72109 00:06:46.212 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72109 00:06:46.780 00:06:46.780 real 0m3.964s 00:06:46.780 user 0m4.539s 00:06:46.780 sys 0m1.124s 00:06:46.780 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.780 05:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.780 ************************************ 00:06:46.780 END TEST non_locking_app_on_locked_coremask 00:06:46.780 ************************************ 00:06:46.780 05:54:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:46.780 05:54:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.780 05:54:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.780 05:54:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.780 ************************************ 00:06:46.780 START TEST locking_app_on_unlocked_coremask 00:06:46.780 ************************************ 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72174 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72174 /var/tmp/spdk.sock 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72174 ']' 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.780 05:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.780 [2024-12-08 05:54:09.700695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:46.780 [2024-12-08 05:54:09.700871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72174 ] 00:06:47.039 [2024-12-08 05:54:09.850734] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
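The test starting here runs two targets on the same core, which only succeeds because the first instance is told not to claim the per-core lock. A sketch of that launch pair, with both command lines taken verbatim from the trace; backgrounding and the pid captures are illustrative, and in the real harness each launch is followed by a waitforlisten on its socket:

BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
# First instance: core 0, but without taking the core 0 lock file.
$BIN -m 0x1 --disable-cpumask-locks &
pid1=$!
# Second instance: same core 0, separate RPC socket. This would fail
# if the first instance had locked core 0.
$BIN -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!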
00:06:47.039 [2024-12-08 05:54:09.850817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.039 [2024-12-08 05:54:09.891639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72182 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72182 /var/tmp/spdk2.sock 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72182 ']' 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.039 05:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.298 [2024-12-08 05:54:10.198603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:47.298 [2024-12-08 05:54:10.198792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72182 ] 00:06:47.558 [2024-12-08 05:54:10.355617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.558 [2024-12-08 05:54:10.434645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.126 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.126 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:48.126 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72182 00:06:48.126 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.126 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72182 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72174 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72174 ']' 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72174 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72174 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.081 killing process with pid 72174 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72174' 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72174 00:06:49.081 05:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72174 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72182 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72182 ']' 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72182 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72182 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.650 killing process with pid 72182 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72182' 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72182 00:06:49.650 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72182 00:06:49.909 00:06:49.909 real 0m3.365s 00:06:49.909 user 0m3.862s 00:06:49.909 sys 0m1.059s 00:06:49.909 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.909 05:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.909 ************************************ 00:06:49.909 END TEST locking_app_on_unlocked_coremask 00:06:49.909 ************************************ 00:06:50.168 05:54:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:50.168 05:54:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.168 05:54:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.168 05:54:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 ************************************ 00:06:50.168 START TEST locking_app_on_locked_coremask 00:06:50.168 ************************************ 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72251 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72251 /var/tmp/spdk.sock 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72251 ']' 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.168 05:54:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 [2024-12-08 05:54:13.126518] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:50.168 [2024-12-08 05:54:13.126702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72251 ] 00:06:50.427 [2024-12-08 05:54:13.268705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.427 [2024-12-08 05:54:13.304033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72267 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72267 /var/tmp/spdk2.sock 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72267 /var/tmp/spdk2.sock 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72267 /var/tmp/spdk2.sock 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72267 ']' 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.367 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.367 [2024-12-08 05:54:14.141591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
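The locked-coremask case inverts the previous one: the second target must fail to start, and the harness asserts that with its NOT wrapper, whose xtrace appears above. A simplified reconstruction of that wrapper, based only on the es handling visible in the trace; the real helper in autotest_common.sh also validates the argument and remaps exit codes above 128, both elided here:

# NOT <cmd...>: succeed only if the command fails.
NOT() {
  local es=0
  "$@" || es=$?
  # the traced helper additionally handles es > 128 (signal deaths)
  (( es != 0 ))
}
# Usage mirroring the trace: attaching to a second target on an
# already-locked core must fail.
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock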
00:06:51.367 [2024-12-08 05:54:14.141749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72267 ] 00:06:51.367 [2024-12-08 05:54:14.290688] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72251 has claimed it. 00:06:51.367 [2024-12-08 05:54:14.290776] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.934 ERROR: process (pid: 72267) is no longer running 00:06:51.934 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72267) - No such process 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72251 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72251 00:06:51.934 05:54:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.503 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72251 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72251 ']' 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72251 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72251 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.504 killing process with pid 72251 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72251' 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72251 00:06:52.504 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72251 00:06:52.761 00:06:52.761 real 0m2.613s 00:06:52.761 user 0m3.101s 00:06:52.761 sys 0m0.636s 00:06:52.762 05:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.762 05:54:15 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:52.762 ************************************ 00:06:52.762 END TEST locking_app_on_locked_coremask 00:06:52.762 ************************************ 00:06:52.762 05:54:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:52.762 05:54:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.762 05:54:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.762 05:54:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.762 ************************************ 00:06:52.762 START TEST locking_overlapped_coremask 00:06:52.762 ************************************ 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72315 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72315 /var/tmp/spdk.sock 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72315 ']' 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.762 05:54:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.762 [2024-12-08 05:54:15.769032] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:52.762 [2024-12-08 05:54:15.769456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72315 ] 00:06:53.020 [2024-12-08 05:54:15.908445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.020 [2024-12-08 05:54:15.947274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.020 [2024-12-08 05:54:15.947293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.020 [2024-12-08 05:54:15.947305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72333 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72333 /var/tmp/spdk2.sock 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72333 /var/tmp/spdk2.sock 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72333 /var/tmp/spdk2.sock 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72333 ']' 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.952 05:54:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.952 [2024-12-08 05:54:16.895380] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
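Note: the NOT/valid_exec_arg trace above is the suite's negative-test idiom: the second target is expected to fail, and the wrapper turns that expected failure into a passing assertion. A simplified sketch (the real helper in autotest_common.sh also validates the argument type and records the exit status in es):

  # Succeed only when the wrapped command fails.
  NOT() {
    if "$@"; then
      return 1   # command unexpectedly succeeded
    fi
    return 0     # command failed, which is what this test wants
  }
  NOT waitforlisten 72333 /var/tmp/spdk2.sock   # passes: core 2 is already claimed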
00:06:53.952 [2024-12-08 05:54:16.895830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72333 ] 00:06:54.211 [2024-12-08 05:54:17.054134] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72315 has claimed it. 00:06:54.211 [2024-12-08 05:54:17.054257] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.778 ERROR: process (pid: 72333) is no longer running 00:06:54.778 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72333) - No such process 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72315 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72315 ']' 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72315 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72315 00:06:54.778 killing process with pid 72315 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72315' 00:06:54.778 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72315 00:06:54.778 05:54:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72315 00:06:55.037 00:06:55.037 real 0m2.224s 00:06:55.037 user 0m6.357s 00:06:55.037 sys 0m0.464s 00:06:55.037 ************************************ 00:06:55.037 END TEST locking_overlapped_coremask 00:06:55.037 ************************************ 00:06:55.037 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.037 05:54:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.037 05:54:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.037 05:54:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.037 05:54:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.037 05:54:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.037 ************************************ 00:06:55.037 START TEST locking_overlapped_coremask_via_rpc 00:06:55.037 ************************************ 00:06:55.037 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:55.037 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72375 00:06:55.037 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72375 /var/tmp/spdk.sock 00:06:55.037 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.037 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72375 ']' 00:06:55.038 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.038 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.038 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.038 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.038 05:54:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.038 [2024-12-08 05:54:18.038302] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:55.038 [2024-12-08 05:54:18.038459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72375 ] 00:06:55.297 [2024-12-08 05:54:18.180410] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
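Note: the check_remaining_locks trace at the end of the previous test reduces to a glob-versus-brace-expansion comparison over the lock files. A minimal sketch using the same paths shown in the trace:

  # One lock file is created per claimed core; for mask 0x7 that is cores 0-2.
  locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever actually exists
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # exactly cores 000-002
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # true iff nothing extra or missing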
00:06:55.297 [2024-12-08 05:54:18.180480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.297 [2024-12-08 05:54:18.223061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.297 [2024-12-08 05:54:18.223125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.297 [2024-12-08 05:54:18.223149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72385 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72385 /var/tmp/spdk2.sock 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72385 ']' 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.556 05:54:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.556 [2024-12-08 05:54:18.513673] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:55.556 [2024-12-08 05:54:18.514063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72385 ] 00:06:55.815 [2024-12-08 05:54:18.675886] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
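Note: both targets in this test are launched with --disable-cpumask-locks (hence the two 'CPU core locks deactivated' notices above), so they may start on overlapping masks; the locks are only claimed later through the framework_enable_cpumask_locks RPC. Illustrative launch pair, binary path shortened from the trace:

  spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4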
00:06:55.815 [2024-12-08 05:54:18.675979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:55.815 [2024-12-08 05:54:18.752259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:06:55.815 [2024-12-08 05:54:18.758346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:55.815 [2024-12-08 05:54:18.758419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.382 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.641 [2024-12-08 05:54:19.428470] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72375 has claimed it.
00:06:56.641 request:
00:06:56.641 {
00:06:56.641 "method": "framework_enable_cpumask_locks",
00:06:56.641 "req_id": 1
00:06:56.641 }
00:06:56.641 Got JSON-RPC error response
00:06:56.641 response:
00:06:56.641 {
00:06:56.641 "code": -32603,
00:06:56.641 "message": "Failed to claim CPU core: 2"
00:06:56.641 }
00:06:56.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
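Note: the -32603 response above can be reproduced by hand; rpc_cmd is a thin wrapper around scripts/rpc.py, and the socket path and method name are taken from the trace:

  # Ask the second target to claim its cores while pid 72375 still holds core 2.
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> error -32603: 'Failed to claim CPU core: 2'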
00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72375 /var/tmp/spdk.sock 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72375 ']' 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.641 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72385 /var/tmp/spdk2.sock 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72385 ']' 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
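Note: the body of waitforlisten runs with xtrace disabled, so the polling loop itself never appears in the log. A plausible minimal equivalent built only from what the trace exposes (max_retries=100, the RPC socket path, and the rpc_get_methods method); the real helper differs in detail:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
      kill -s 0 "$pid" 2>/dev/null || return 1   # target already exited
      [[ -S $rpc_addr ]] &&
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 &&
        return 0                                 # socket is up and answering
      sleep 0.1
    done
    return 1
  }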
00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.899 00:06:56.899 real 0m1.985s 00:06:56.899 user 0m1.135s 00:06:56.899 sys 0m0.145s 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.899 05:54:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.899 ************************************ 00:06:56.899 END TEST locking_overlapped_coremask_via_rpc 00:06:56.899 ************************************ 00:06:57.158 05:54:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.158 05:54:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72375 ]] 00:06:57.158 05:54:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72375 00:06:57.158 05:54:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72375 ']' 00:06:57.158 05:54:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72375 00:06:57.158 05:54:19 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:57.158 05:54:19 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.158 05:54:19 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72375 00:06:57.158 killing process with pid 72375 00:06:57.158 05:54:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.158 05:54:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.158 05:54:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72375' 00:06:57.158 05:54:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72375 00:06:57.158 05:54:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72375 00:06:57.416 05:54:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72385 ]] 00:06:57.416 05:54:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72385 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72385 ']' 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72385 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.416 
05:54:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72385 00:06:57.416 killing process with pid 72385 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72385' 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72385 00:06:57.416 05:54:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72385 00:06:57.674 05:54:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.674 05:54:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.674 05:54:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72375 ]] 00:06:57.674 05:54:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72375 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72375 ']' 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72375 00:06:57.674 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72375) - No such process 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72375 is not found' 00:06:57.674 Process with pid 72375 is not found 00:06:57.674 05:54:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72385 ]] 00:06:57.674 05:54:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72385 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72385 ']' 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72385 00:06:57.674 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72385) - No such process 00:06:57.674 Process with pid 72385 is not found 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72385 is not found' 00:06:57.674 05:54:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.674 00:06:57.674 real 0m19.140s 00:06:57.674 user 0m33.249s 00:06:57.674 sys 0m5.342s 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.674 ************************************ 00:06:57.674 END TEST cpu_locks 00:06:57.674 ************************************ 00:06:57.674 05:54:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.932 ************************************ 00:06:57.932 END TEST event 00:06:57.932 ************************************ 00:06:57.932 00:06:57.932 real 0m48.050s 00:06:57.932 user 1m34.387s 00:06:57.932 sys 0m8.878s 00:06:57.932 05:54:20 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.932 05:54:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.932 05:54:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.932 05:54:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.932 05:54:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.932 05:54:20 -- common/autotest_common.sh@10 -- # set +x 00:06:57.932 ************************************ 00:06:57.932 START TEST thread 00:06:57.932 ************************************ 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.932 * Looking for test storage... 
00:06:57.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.932 05:54:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.932 05:54:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.932 05:54:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.932 05:54:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.932 05:54:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.932 05:54:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.932 05:54:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.932 05:54:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.932 05:54:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.932 05:54:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.932 05:54:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.932 05:54:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:57.932 05:54:20 thread -- scripts/common.sh@345 -- # : 1 00:06:57.932 05:54:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.932 05:54:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.932 05:54:20 thread -- scripts/common.sh@365 -- # decimal 1 00:06:57.932 05:54:20 thread -- scripts/common.sh@353 -- # local d=1 00:06:57.932 05:54:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.932 05:54:20 thread -- scripts/common.sh@355 -- # echo 1 00:06:57.932 05:54:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.932 05:54:20 thread -- scripts/common.sh@366 -- # decimal 2 00:06:57.932 05:54:20 thread -- scripts/common.sh@353 -- # local d=2 00:06:57.932 05:54:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.932 05:54:20 thread -- scripts/common.sh@355 -- # echo 2 00:06:57.932 05:54:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.932 05:54:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.932 05:54:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.932 05:54:20 thread -- scripts/common.sh@368 -- # return 0 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.932 --rc genhtml_branch_coverage=1 00:06:57.932 --rc genhtml_function_coverage=1 00:06:57.932 --rc genhtml_legend=1 00:06:57.932 --rc geninfo_all_blocks=1 00:06:57.932 --rc geninfo_unexecuted_blocks=1 00:06:57.932 00:06:57.932 ' 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.932 --rc genhtml_branch_coverage=1 00:06:57.932 --rc genhtml_function_coverage=1 00:06:57.932 --rc genhtml_legend=1 00:06:57.932 --rc geninfo_all_blocks=1 00:06:57.932 --rc geninfo_unexecuted_blocks=1 00:06:57.932 00:06:57.932 ' 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:57.932 --rc genhtml_branch_coverage=1 00:06:57.932 --rc genhtml_function_coverage=1 00:06:57.932 --rc genhtml_legend=1 00:06:57.932 --rc geninfo_all_blocks=1 00:06:57.932 --rc geninfo_unexecuted_blocks=1 00:06:57.932 00:06:57.932 ' 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.932 --rc genhtml_branch_coverage=1 00:06:57.932 --rc genhtml_function_coverage=1 00:06:57.932 --rc genhtml_legend=1 00:06:57.932 --rc geninfo_all_blocks=1 00:06:57.932 --rc geninfo_unexecuted_blocks=1 00:06:57.932 00:06:57.932 ' 00:06:57.932 05:54:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.932 05:54:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.932 ************************************ 00:06:57.932 START TEST thread_poller_perf 00:06:57.932 ************************************ 00:06:57.932 05:54:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.932 [2024-12-08 05:54:20.967745] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:57.932 [2024-12-08 05:54:20.968028] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72518 ] 00:06:58.189 [2024-12-08 05:54:21.109733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.189 [2024-12-08 05:54:21.143791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.189 Running 1000 pollers for 1 seconds with 1 microseconds period. 
[2024-12-08T05:54:22.604Z] ======================================
00:06:59.559 [2024-12-08T05:54:22.604Z] busy:2213344816 (cyc)
00:06:59.559 [2024-12-08T05:54:22.604Z] total_run_count: 345000
00:06:59.559 [2024-12-08T05:54:22.604Z] tsc_hz: 2200000000 (cyc)
00:06:59.559 [2024-12-08T05:54:22.604Z] ======================================
00:06:59.559 [2024-12-08T05:54:22.604Z] poller_cost: 6415 (cyc), 2915 (nsec)
00:06:59.559
00:06:59.559 real 0m1.281s
00:06:59.559 user 0m1.113s
00:06:59.559 sys 0m0.061s
00:06:59.559 05:54:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:59.559 ************************************
00:06:59.559 05:54:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:59.559 END TEST thread_poller_perf
00:06:59.559 ************************************
00:06:59.559 05:54:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:59.559 05:54:22 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:06:59.559 05:54:22 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:59.559 05:54:22 thread -- common/autotest_common.sh@10 -- # set +x
00:06:59.559 ************************************
00:06:59.559 START TEST thread_poller_perf
00:06:59.559 ************************************
00:06:59.559 05:54:22 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:59.559 [2024-12-08 05:54:22.312274] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:59.559 [2024-12-08 05:54:22.312749] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72554 ]
00:06:59.559 [2024-12-08 05:54:22.460547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.559 Running 1000 pollers for 1 seconds with 0 microseconds period.
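Note: in the ====== summaries above and below, poller_cost is integer arithmetic over the preceding lines: poller_cost(cyc) = busy / total_run_count = 2213344816 / 345000 = 6415, and poller_cost(nsec) = 6415 * 10^9 / 2200000000 = 2915; the second run below yields 2204371570 / 4177000 = 527 cyc and 239 nsec the same way.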
00:06:59.559 [2024-12-08 05:54:22.497959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.934 [2024-12-08T05:54:23.979Z] ====================================== 00:07:00.934 [2024-12-08T05:54:23.979Z] busy:2204371570 (cyc) 00:07:00.934 [2024-12-08T05:54:23.979Z] total_run_count: 4177000 00:07:00.934 [2024-12-08T05:54:23.979Z] tsc_hz: 2200000000 (cyc) 00:07:00.934 [2024-12-08T05:54:23.979Z] ====================================== 00:07:00.934 [2024-12-08T05:54:23.979Z] poller_cost: 527 (cyc), 239 (nsec) 00:07:00.934 00:07:00.934 real 0m1.301s 00:07:00.934 user 0m1.117s 00:07:00.934 sys 0m0.075s 00:07:00.934 ************************************ 00:07:00.934 END TEST thread_poller_perf 00:07:00.934 ************************************ 00:07:00.934 05:54:23 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.934 05:54:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.934 05:54:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.934 00:07:00.934 real 0m2.853s 00:07:00.934 user 0m2.360s 00:07:00.934 sys 0m0.270s 00:07:00.934 ************************************ 00:07:00.934 END TEST thread 00:07:00.934 ************************************ 00:07:00.934 05:54:23 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.934 05:54:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.934 05:54:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:00.934 05:54:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.934 05:54:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.934 05:54:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.934 05:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:00.934 ************************************ 00:07:00.934 START TEST app_cmdline 00:07:00.934 ************************************ 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.934 * Looking for test storage... 
00:07:00.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.934 05:54:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.934 --rc genhtml_branch_coverage=1 00:07:00.934 --rc genhtml_function_coverage=1 00:07:00.934 --rc genhtml_legend=1 00:07:00.934 --rc geninfo_all_blocks=1 00:07:00.934 --rc geninfo_unexecuted_blocks=1 00:07:00.934 00:07:00.934 ' 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.934 --rc genhtml_branch_coverage=1 00:07:00.934 --rc genhtml_function_coverage=1 00:07:00.934 --rc genhtml_legend=1 00:07:00.934 --rc geninfo_all_blocks=1 00:07:00.934 --rc geninfo_unexecuted_blocks=1 00:07:00.934 00:07:00.934 ' 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.934 --rc genhtml_branch_coverage=1 00:07:00.934 --rc genhtml_function_coverage=1 00:07:00.934 --rc genhtml_legend=1 00:07:00.934 --rc geninfo_all_blocks=1 00:07:00.934 --rc geninfo_unexecuted_blocks=1 00:07:00.934 00:07:00.934 ' 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.934 --rc genhtml_branch_coverage=1 00:07:00.934 --rc genhtml_function_coverage=1 00:07:00.934 --rc genhtml_legend=1 00:07:00.934 --rc geninfo_all_blocks=1 00:07:00.934 --rc geninfo_unexecuted_blocks=1 00:07:00.934 00:07:00.934 ' 00:07:00.934 05:54:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:00.934 05:54:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72638 00:07:00.934 05:54:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72638 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72638 ']' 00:07:00.934 05:54:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.934 05:54:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.934 [2024-12-08 05:54:23.973755] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
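Note: the spdk_tgt above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over its socket. Illustrative calls; the outcomes match the responses logged below:

  scripts/rpc.py spdk_get_version           # allowed: returns the version JSON
  scripts/rpc.py rpc_get_methods            # allowed: lists exactly these two methods
  scripts/rpc.py env_dpdk_get_mem_stats     # rejected: -32601 'Method not found'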
00:07:00.934 [2024-12-08 05:54:23.974203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72638 ]
00:07:01.192 [2024-12-08 05:54:24.122819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.192 [2024-12-08 05:54:24.158500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.126 05:54:24 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:02.126 05:54:24 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:07:02.126 05:54:24 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:07:02.384 {
00:07:02.384 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62",
00:07:02.384 "fields": {
00:07:02.384 "major": 24,
00:07:02.385 "minor": 9,
00:07:02.385 "patch": 1,
00:07:02.385 "suffix": "-pre",
00:07:02.385 "commit": "b18e1bd62"
00:07:02.385 }
00:07:02.385 }
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:02.385 05:54:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:07:02.385 05:54:25 app_cmdline -- common/autotest_common.sh@653 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.643 request: 00:07:02.643 { 00:07:02.643 "method": "env_dpdk_get_mem_stats", 00:07:02.643 "req_id": 1 00:07:02.643 } 00:07:02.643 Got JSON-RPC error response 00:07:02.643 response: 00:07:02.643 { 00:07:02.643 "code": -32601, 00:07:02.643 "message": "Method not found" 00:07:02.643 } 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.643 05:54:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72638 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72638 ']' 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72638 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72638 00:07:02.643 killing process with pid 72638 00:07:02.643 05:54:25 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.644 05:54:25 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.644 05:54:25 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72638' 00:07:02.644 05:54:25 app_cmdline -- common/autotest_common.sh@969 -- # kill 72638 00:07:02.644 05:54:25 app_cmdline -- common/autotest_common.sh@974 -- # wait 72638 00:07:03.211 00:07:03.211 real 0m2.299s 00:07:03.211 user 0m2.978s 00:07:03.211 sys 0m0.471s 00:07:03.211 ************************************ 00:07:03.211 END TEST app_cmdline 00:07:03.211 ************************************ 00:07:03.211 05:54:25 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.211 05:54:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.211 05:54:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:03.211 05:54:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.211 05:54:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.211 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:03.211 ************************************ 00:07:03.211 START TEST version 00:07:03.211 ************************************ 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:03.211 * Looking for test storage... 
00:07:03.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.211 05:54:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.211 05:54:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.211 05:54:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.211 05:54:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.211 05:54:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.211 05:54:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.211 05:54:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.211 05:54:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.211 05:54:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.211 05:54:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.211 05:54:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.211 05:54:26 version -- scripts/common.sh@344 -- # case "$op" in 00:07:03.211 05:54:26 version -- scripts/common.sh@345 -- # : 1 00:07:03.211 05:54:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.211 05:54:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.211 05:54:26 version -- scripts/common.sh@365 -- # decimal 1 00:07:03.211 05:54:26 version -- scripts/common.sh@353 -- # local d=1 00:07:03.211 05:54:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.211 05:54:26 version -- scripts/common.sh@355 -- # echo 1 00:07:03.211 05:54:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.211 05:54:26 version -- scripts/common.sh@366 -- # decimal 2 00:07:03.211 05:54:26 version -- scripts/common.sh@353 -- # local d=2 00:07:03.211 05:54:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.211 05:54:26 version -- scripts/common.sh@355 -- # echo 2 00:07:03.211 05:54:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.211 05:54:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.211 05:54:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.211 05:54:26 version -- scripts/common.sh@368 -- # return 0 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.211 --rc genhtml_branch_coverage=1 00:07:03.211 --rc genhtml_function_coverage=1 00:07:03.211 --rc genhtml_legend=1 00:07:03.211 --rc geninfo_all_blocks=1 00:07:03.211 --rc geninfo_unexecuted_blocks=1 00:07:03.211 00:07:03.211 ' 00:07:03.211 05:54:26 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.211 --rc genhtml_branch_coverage=1 00:07:03.211 --rc genhtml_function_coverage=1 00:07:03.212 --rc genhtml_legend=1 00:07:03.212 --rc geninfo_all_blocks=1 00:07:03.212 --rc geninfo_unexecuted_blocks=1 00:07:03.212 00:07:03.212 ' 00:07:03.212 05:54:26 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.212 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:03.212 --rc genhtml_branch_coverage=1 00:07:03.212 --rc genhtml_function_coverage=1 00:07:03.212 --rc genhtml_legend=1 00:07:03.212 --rc geninfo_all_blocks=1 00:07:03.212 --rc geninfo_unexecuted_blocks=1 00:07:03.212 00:07:03.212 ' 00:07:03.212 05:54:26 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.212 --rc genhtml_branch_coverage=1 00:07:03.212 --rc genhtml_function_coverage=1 00:07:03.212 --rc genhtml_legend=1 00:07:03.212 --rc geninfo_all_blocks=1 00:07:03.212 --rc geninfo_unexecuted_blocks=1 00:07:03.212 00:07:03.212 ' 00:07:03.212 05:54:26 version -- app/version.sh@17 -- # get_header_version major 00:07:03.212 05:54:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # cut -f2 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.212 05:54:26 version -- app/version.sh@17 -- # major=24 00:07:03.212 05:54:26 version -- app/version.sh@18 -- # get_header_version minor 00:07:03.212 05:54:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # cut -f2 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.212 05:54:26 version -- app/version.sh@18 -- # minor=9 00:07:03.212 05:54:26 version -- app/version.sh@19 -- # get_header_version patch 00:07:03.212 05:54:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # cut -f2 00:07:03.212 05:54:26 version -- app/version.sh@19 -- # patch=1 00:07:03.212 05:54:26 version -- app/version.sh@20 -- # get_header_version suffix 00:07:03.212 05:54:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # cut -f2 00:07:03.212 05:54:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.212 05:54:26 version -- app/version.sh@20 -- # suffix=-pre 00:07:03.212 05:54:26 version -- app/version.sh@22 -- # version=24.9 00:07:03.212 05:54:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.212 05:54:26 version -- app/version.sh@25 -- # version=24.9.1 00:07:03.212 05:54:26 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:03.212 05:54:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:03.212 05:54:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.472 05:54:26 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:03.472 05:54:26 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:03.472 00:07:03.472 real 0m0.254s 00:07:03.472 user 0m0.164s 00:07:03.472 sys 0m0.128s 00:07:03.472 ************************************ 00:07:03.472 END TEST version 00:07:03.472 ************************************ 00:07:03.472 05:54:26 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.472 05:54:26 
version -- common/autotest_common.sh@10 -- # set +x 00:07:03.472 05:54:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:03.472 05:54:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:03.472 05:54:26 -- spdk/autotest.sh@194 -- # uname -s 00:07:03.472 05:54:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:03.472 05:54:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.472 05:54:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.472 05:54:26 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:03.472 05:54:26 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:03.472 05:54:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.472 05:54:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.472 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:03.472 ************************************ 00:07:03.472 START TEST blockdev_nvme 00:07:03.472 ************************************ 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:03.472 * Looking for test storage... 00:07:03.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.472 05:54:26 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.472 --rc genhtml_branch_coverage=1 00:07:03.472 --rc genhtml_function_coverage=1 00:07:03.472 --rc genhtml_legend=1 00:07:03.472 --rc geninfo_all_blocks=1 00:07:03.472 --rc geninfo_unexecuted_blocks=1 00:07:03.472 00:07:03.472 ' 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.472 --rc genhtml_branch_coverage=1 00:07:03.472 --rc genhtml_function_coverage=1 00:07:03.472 --rc genhtml_legend=1 00:07:03.472 --rc geninfo_all_blocks=1 00:07:03.472 --rc geninfo_unexecuted_blocks=1 00:07:03.472 00:07:03.472 ' 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.472 --rc genhtml_branch_coverage=1 00:07:03.472 --rc genhtml_function_coverage=1 00:07:03.472 --rc genhtml_legend=1 00:07:03.472 --rc geninfo_all_blocks=1 00:07:03.472 --rc geninfo_unexecuted_blocks=1 00:07:03.472 00:07:03.472 ' 00:07:03.472 05:54:26 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.472 --rc genhtml_branch_coverage=1 00:07:03.472 --rc genhtml_function_coverage=1 00:07:03.472 --rc genhtml_legend=1 00:07:03.472 --rc geninfo_all_blocks=1 00:07:03.472 --rc geninfo_unexecuted_blocks=1 00:07:03.472 00:07:03.472 ' 00:07:03.472 05:54:26 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.472 05:54:26 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.472 05:54:26 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:03.472 05:54:26 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.472 05:54:26 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:03.472 05:54:26 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:03.473 05:54:26 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:03.732 05:54:26 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72804 00:07:03.732 05:54:26 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:03.732 05:54:26 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:03.732 05:54:26 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72804 00:07:03.732 05:54:26 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72804 ']' 00:07:03.732 05:54:26 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.732 05:54:26 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.732 05:54:26 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.732 05:54:26 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.732 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.732 [2024-12-08 05:54:26.605307] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
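The step above launches spdk_tgt and then parks in waitforlisten until the target's RPC socket accepts connections. A minimal sketch of that wait loop, assuming the default /var/tmp/spdk.sock path visible in the trace (the real helper in autotest_common.sh is more thorough and also verifies the RPC layer is responsive):

    # Hedged sketch: poll until the target process has created its RPC socket.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target process exited
            [[ -S $sock ]] && return 0              # UNIX-domain socket file exists
            sleep 0.1
        done
        return 1
    }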
00:07:03.732 [2024-12-08 05:54:26.605703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72804 ] 00:07:03.732 [2024-12-08 05:54:26.744342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.992 [2024-12-08 05:54:26.780838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.992 05:54:26 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.992 05:54:26 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:03.992 05:54:26 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:03.992 05:54:26 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:03.992 05:54:26 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:03.992 05:54:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:03.992 05:54:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:03.992 05:54:27 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:03.992 05:54:27 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.992 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.250 05:54:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.250 05:54:27 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:04.250 05:54:27 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.250 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.521 05:54:27 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d4f9cec9-8b10-4db1-84bf-767a4dbe29f0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d4f9cec9-8b10-4db1-84bf-767a4dbe29f0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "34fde2b9-9dd4-48b9-8687-3a2f8420432b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "34fde2b9-9dd4-48b9-8687-3a2f8420432b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8ff9b989-386d-4efe-9fad-1b53e6994b66"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ff9b989-386d-4efe-9fad-1b53e6994b66",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7da481aa-3df2-41a2-a8a3-9fc167d0fd2b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7da481aa-3df2-41a2-a8a3-9fc167d0fd2b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6a4d7194-638e-49a0-87bf-2a64482e7c3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "6a4d7194-638e-49a0-87bf-2a64482e7c3a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8201f403-121f-4f3f-9cb1-b138b40071d0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8201f403-121f-4f3f-9cb1-b138b40071d0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:04.521 05:54:27 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72804 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72804 ']' 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72804 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:04.521 05:54:27 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72804 00:07:04.521 killing process with pid 72804 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72804' 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72804 00:07:04.521 05:54:27 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72804 00:07:05.091 05:54:27 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.091 05:54:27 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.091 05:54:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:05.091 05:54:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.091 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.091 ************************************ 00:07:05.091 START TEST bdev_hello_world 00:07:05.091 ************************************ 00:07:05.091 05:54:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.091 [2024-12-08 05:54:27.955401] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:05.091 [2024-12-08 05:54:27.955838] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72870 ] 00:07:05.091 [2024-12-08 05:54:28.100296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.349 [2024-12-08 05:54:28.135277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.607 [2024-12-08 05:54:28.497149] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:05.607 [2024-12-08 05:54:28.497258] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:05.607 [2024-12-08 05:54:28.497296] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:05.607 [2024-12-08 05:54:28.499965] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:05.607 [2024-12-08 05:54:28.500647] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:05.607 [2024-12-08 05:54:28.500736] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:05.607 [2024-12-08 05:54:28.500957] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
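The bdev_hello_world test above is a thin wrapper around the hello_bdev example binary: it opens Nvme0n1, writes "Hello World!", and reads it back, which is exactly what the NOTICE lines report. The same step can be rerun outside run_test with the command already shown in the trace (paths as logged; the trailing '' in the harness call is just an empty extra-arguments slot):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1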
00:07:05.607 00:07:05.607 [2024-12-08 05:54:28.500997] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:05.865 00:07:05.865 real 0m0.844s 00:07:05.865 user 0m0.571s 00:07:05.865 sys 0m0.168s 00:07:05.865 05:54:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.865 05:54:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:05.865 ************************************ 00:07:05.865 END TEST bdev_hello_world 00:07:05.865 ************************************ 00:07:05.865 05:54:28 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:05.865 05:54:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.865 05:54:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.865 05:54:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.865 ************************************ 00:07:05.865 START TEST bdev_bounds 00:07:05.865 ************************************ 00:07:05.865 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:05.865 05:54:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72901 00:07:05.865 05:54:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:05.865 05:54:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.865 Process bdevio pid: 72901 00:07:05.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.865 05:54:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72901' 00:07:05.865 05:54:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72901 00:07:05.865 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72901 ']' 00:07:05.866 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.866 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.866 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.866 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.866 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:05.866 [2024-12-08 05:54:28.836290] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
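bdev_bounds drives the bdevio app: it is started with -w so it waits for RPC after loading the JSON config, and the CUnit suites below are then kicked off over RPC by its tests.py helper. A condensed sketch of that two-step flow using the command lines from the trace (the explicit backgrounding and final kill are assumptions of this sketch; the harness tracks the pid itself):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # Run the full bdevio CUnit suite against the waiting app.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"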
00:07:05.866 [2024-12-08 05:54:28.836459] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72901 ] 00:07:06.124 [2024-12-08 05:54:28.976529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.124 [2024-12-08 05:54:29.014980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.124 [2024-12-08 05:54:29.015068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.124 [2024-12-08 05:54:29.015148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.060 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.061 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:07.061 05:54:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:07.061 I/O targets: 00:07:07.061 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:07.061 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:07.061 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.061 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.061 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.061 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:07.061 00:07:07.061 00:07:07.061 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.061 http://cunit.sourceforge.net/ 00:07:07.061 00:07:07.061 00:07:07.061 Suite: bdevio tests on: Nvme3n1 00:07:07.061 Test: blockdev write read block ...passed 00:07:07.061 Test: blockdev write zeroes read block ...passed 00:07:07.061 Test: blockdev write zeroes read no split ...passed 00:07:07.061 Test: blockdev write zeroes read split ...passed 00:07:07.061 Test: blockdev write zeroes read split partial ...passed 00:07:07.061 Test: blockdev reset ...[2024-12-08 05:54:30.032601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:07.061 [2024-12-08 05:54:30.035120] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:07.061 passed 00:07:07.061 Test: blockdev write read 8 blocks ...passed 00:07:07.061 Test: blockdev write read size > 128k ...passed 00:07:07.061 Test: blockdev write read invalid size ...passed 00:07:07.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.061 Test: blockdev write read max offset ...passed 00:07:07.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.061 Test: blockdev writev readv 8 blocks ...passed 00:07:07.061 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.061 Test: blockdev writev readv block ...passed 00:07:07.061 Test: blockdev writev readv size > 128k ...passed 00:07:07.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.061 Test: blockdev comparev and writev ...[2024-12-08 05:54:30.041998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e0a000 len:0x1000 00:07:07.061 [2024-12-08 05:54:30.042077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.061 passed 00:07:07.061 Test: blockdev nvme passthru rw ...passed 00:07:07.061 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.061 Test: blockdev nvme admin passthru ...[2024-12-08 05:54:30.042888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.061 [2024-12-08 05:54:30.042940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.061 passed 00:07:07.061 Test: blockdev copy ...passed 00:07:07.061 Suite: bdevio tests on: Nvme2n3 00:07:07.061 Test: blockdev write read block ...passed 00:07:07.061 Test: blockdev write zeroes read block ...passed 00:07:07.061 Test: blockdev write zeroes read no split ...passed 00:07:07.061 Test: blockdev write zeroes read split ...passed 00:07:07.061 Test: blockdev write zeroes read split partial ...passed 00:07:07.061 Test: blockdev reset ...[2024-12-08 05:54:30.054887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:07.061 [2024-12-08 05:54:30.057468] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
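In the completion notices above, the pair in parentheses is the NVMe status code type and status code in hex: (02/85) is Compare Failure, the mismatch bdevio provokes on purpose in the comparev test, and (00/01) is Invalid Opcode from the passthru probes. A throwaway parser for pulling that pair out of such a line, illustrative only:

    # Hedged helper: extract the SCT/SC pair from an spdk_nvme_print_completion notice.
    parse_nvme_status() {
        sed -n 's/.*(\([0-9a-f][0-9a-f]\)\/\([0-9a-f][0-9a-f]\)).*/SCT=0x\1 SC=0x\2/p' <<< "$1"
    }
    parse_nvme_status 'COMPARE FAILURE (02/85) qid:1 cid:190'  # prints: SCT=0x02 SC=0x85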
00:07:07.061 passed 00:07:07.061 Test: blockdev write read 8 blocks ...passed 00:07:07.061 Test: blockdev write read size > 128k ...passed 00:07:07.061 Test: blockdev write read invalid size ...passed 00:07:07.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.061 Test: blockdev write read max offset ...passed 00:07:07.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.061 Test: blockdev writev readv 8 blocks ...passed 00:07:07.061 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.061 Test: blockdev writev readv block ...passed 00:07:07.061 Test: blockdev writev readv size > 128k ...passed 00:07:07.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.061 Test: blockdev comparev and writev ...[2024-12-08 05:54:30.063517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:07:07.061 [2024-12-08 05:54:30.063747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.061 passed 00:07:07.061 Test: blockdev nvme passthru rw ...passed 00:07:07.061 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.061 Test: blockdev nvme admin passthru ...[2024-12-08 05:54:30.064577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.061 [2024-12-08 05:54:30.064626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.061 passed 00:07:07.061 Test: blockdev copy ...passed
00:07:07.061 Suite: bdevio tests on: Nvme2n2 00:07:07.061 Test: blockdev write read block ...passed 00:07:07.061 Test: blockdev write zeroes read block ...passed 00:07:07.061 Test: blockdev write zeroes read no split ...passed 00:07:07.061 Test: blockdev write zeroes read split ...passed 00:07:07.061 Test: blockdev write zeroes read split partial ...passed 00:07:07.061 Test: blockdev reset ...[2024-12-08 05:54:30.076889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:07.061 [2024-12-08 05:54:30.079487] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:07.061 passed 00:07:07.061 Test: blockdev write read 8 blocks ...passed 00:07:07.061 Test: blockdev write read size > 128k ...passed 00:07:07.061 Test: blockdev write read invalid size ...passed 00:07:07.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.061 Test: blockdev write read max offset ...passed 00:07:07.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.061 Test: blockdev writev readv 8 blocks ...passed 00:07:07.061 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.061 Test: blockdev writev readv block ...passed 00:07:07.061 Test: blockdev writev readv size > 128k ...passed 00:07:07.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.061 Test: blockdev comparev and writev ...[2024-12-08 05:54:30.085740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:07:07.061 [2024-12-08 05:54:30.085964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.061 passed 00:07:07.061 Test: blockdev nvme passthru rw ...passed 00:07:07.061 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.061 Test: blockdev nvme admin passthru ...[2024-12-08 05:54:30.086691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.061 [2024-12-08 05:54:30.086739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.061 passed 00:07:07.061 Test: blockdev copy ...passed
00:07:07.061 Suite: bdevio tests on: Nvme2n1 00:07:07.061 Test: blockdev write read block ...passed 00:07:07.061 Test: blockdev write zeroes read block ...passed 00:07:07.061 Test: blockdev write zeroes read no split ...passed 00:07:07.061 Test: blockdev write zeroes read split ...passed 00:07:07.061 Test: blockdev write zeroes read split partial ...passed 00:07:07.061 Test: blockdev reset ...[2024-12-08 05:54:30.098817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:07.061 [2024-12-08 05:54:30.101448] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:07.061 passed 00:07:07.061 Test: blockdev write read 8 blocks ...passed 00:07:07.061 Test: blockdev write read size > 128k ...passed 00:07:07.061 Test: blockdev write read invalid size ...passed 00:07:07.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.061 Test: blockdev write read max offset ...passed 00:07:07.321 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.321 Test: blockdev writev readv 8 blocks ...passed 00:07:07.321 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.321 Test: blockdev writev readv block ...passed 00:07:07.321 Test: blockdev writev readv size > 128k ...passed 00:07:07.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.321 Test: blockdev comparev and writev ...[2024-12-08 05:54:30.107530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:07:07.321 [2024-12-08 05:54:30.107713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.321 passed 00:07:07.321 Test: blockdev nvme passthru rw ...passed 00:07:07.321 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:54:30.108486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.321 [2024-12-08 05:54:30.108530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.321 passed 00:07:07.321 Test: blockdev nvme admin passthru ...passed 00:07:07.321 Test: blockdev copy ...passed
00:07:07.321 Suite: bdevio tests on: Nvme1n1 00:07:07.321 Test: blockdev write read block ...passed 00:07:07.321 Test: blockdev write zeroes read block ...passed 00:07:07.321 Test: blockdev write zeroes read no split ...passed 00:07:07.321 Test: blockdev write zeroes read split ...passed 00:07:07.321 Test: blockdev write zeroes read split partial ...passed 00:07:07.321 Test: blockdev reset ...[2024-12-08 05:54:30.120724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:07.321 [2024-12-08 05:54:30.122761] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:07.321 passed 00:07:07.321 Test: blockdev write read 8 blocks ...passed 00:07:07.321 Test: blockdev write read size > 128k ...passed 00:07:07.321 Test: blockdev write read invalid size ...passed 00:07:07.321 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.321 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.321 Test: blockdev write read max offset ...passed 00:07:07.321 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.321 Test: blockdev writev readv 8 blocks ...passed 00:07:07.321 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.321 Test: blockdev writev readv block ...passed 00:07:07.321 Test: blockdev writev readv size > 128k ...passed 00:07:07.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.322 Test: blockdev comparev and writev ...[2024-12-08 05:54:30.129004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8236000 len:0x1000 00:07:07.322 [2024-12-08 05:54:30.129200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.322 passed 00:07:07.322 Test: blockdev nvme passthru rw ...passed 00:07:07.322 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.322 Test: blockdev nvme admin passthru ...[2024-12-08 05:54:30.129916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.322 [2024-12-08 05:54:30.129966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.322 passed 00:07:07.322 Test: blockdev copy ...passed
00:07:07.322 Suite: bdevio tests on: Nvme0n1 00:07:07.322 Test: blockdev write read block ...passed 00:07:07.322 Test: blockdev write zeroes read block ...passed 00:07:07.322 Test: blockdev write zeroes read no split ...passed 00:07:07.322 Test: blockdev write zeroes read split ...passed 00:07:07.322 Test: blockdev write zeroes read split partial ...passed 00:07:07.322 Test: blockdev reset ...[2024-12-08 05:54:30.143568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:07.322 [2024-12-08 05:54:30.145617] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:07.322 passed 00:07:07.322 Test: blockdev write read 8 blocks ...passed 00:07:07.322 Test: blockdev write read size > 128k ...passed 00:07:07.322 Test: blockdev write read invalid size ...passed 00:07:07.322 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.322 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.322 Test: blockdev write read max offset ...passed 00:07:07.322 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.322 Test: blockdev writev readv 8 blocks ...passed 00:07:07.322 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.322 Test: blockdev writev readv block ...passed 00:07:07.322 Test: blockdev writev readv size > 128k ...passed 00:07:07.322 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.322 Test: blockdev comparev and writev ...[2024-12-08 05:54:30.151061] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:07.322 separate metadata which is not supported yet. 00:07:07.322 passed 00:07:07.322 Test: blockdev nvme passthru rw ...
passed 00:07:07.322 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:54:30.151580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:07.322 [2024-12-08 05:54:30.151776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:07.322 passed 00:07:07.322 Test: blockdev nvme admin passthru ...passed 00:07:07.322 Test: blockdev copy ...passed 00:07:07.322 00:07:07.322 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.322 suites 6 6 n/a 0 0 00:07:07.322 tests 138 138 138 0 0 00:07:07.322 asserts 893 893 893 0 n/a 00:07:07.322 00:07:07.322 Elapsed time = 0.324 seconds 00:07:07.322 0 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72901 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72901 ']' 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72901 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72901 00:07:07.322 killing process with pid 72901 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72901' 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72901 00:07:07.322 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72901
00:07:07.581 05:54:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:07.581 00:07:07.581 real 0m1.641s 00:07:07.581 user 0m4.395s 00:07:07.581 sys 0m0.312s 00:07:07.581 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.581 05:54:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:07.581 ************************************ 00:07:07.581 END TEST bdev_bounds 00:07:07.581 ************************************ 00:07:07.581 05:54:30 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.581 05:54:30 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:07.581 05:54:30 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.581 05:54:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.581 ************************************ 00:07:07.581 START TEST bdev_nbd 00:07:07.581 ************************************ 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local
rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:07.581 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72944 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72944 /var/tmp/spdk-nbd.sock 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72944 ']' 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.582 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.582 [2024-12-08 05:54:30.542519] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
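nbd_function_test exports each bdev through the kernel NBD driver and sanity-checks it with a one-block O_DIRECT read; the trace below repeats that cycle for all six bdevs. A condensed sketch of one iteration, with the RPC socket, helper behavior, and dd parameters taken from the log (the real loop and waitfornbd live in nbd_common.sh and autotest_common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Attach the bdev to an NBD device; the RPC prints the device it picked, e.g. /dev/nbd0.
    dev=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1)
    # waitfornbd: the device is usable once it shows up in /proc/partitions.
    until grep -q -w "${dev#/dev/}" /proc/partitions; do sleep 0.1; done
    # One direct-I/O block read, as in the dd records below.
    dd if="$dev" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct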
00:07:07.582 [2024-12-08 05:54:30.542938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.841 [2024-12-08 05:54:30.684759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.841 [2024-12-08 05:54:30.724163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:08.779 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.039 1+0 records in 
00:07:09.039 1+0 records out 00:07:09.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514094 s, 8.0 MB/s 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.039 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.299 1+0 records in 00:07:09.299 1+0 records out 00:07:09.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894342 s, 4.6 MB/s 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.299 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.558 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.559 1+0 records in 00:07:09.559 1+0 records out 00:07:09.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587502 s, 7.0 MB/s 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.559 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.818 1+0 records in 00:07:09.818 1+0 records out 00:07:09.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507443 s, 8.1 MB/s 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.818 05:54:32 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.818 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.076 1+0 records in 00:07:10.076 1+0 records out 00:07:10.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796206 s, 5.1 MB/s 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.076 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.077 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.645 1+0 records in 00:07:10.645 1+0 records out 00:07:10.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833905 s, 4.9 MB/s 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.645 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.646 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.646 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.646 05:54:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.646 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.646 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.646 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd0", 00:07:10.904 "bdev_name": "Nvme0n1" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd1", 00:07:10.904 "bdev_name": "Nvme1n1" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd2", 00:07:10.904 "bdev_name": "Nvme2n1" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd3", 00:07:10.904 "bdev_name": "Nvme2n2" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd4", 00:07:10.904 "bdev_name": "Nvme2n3" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd5", 00:07:10.904 "bdev_name": "Nvme3n1" 00:07:10.904 } 00:07:10.904 ]' 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd0", 00:07:10.904 "bdev_name": "Nvme0n1" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd1", 00:07:10.904 "bdev_name": "Nvme1n1" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd2", 00:07:10.904 "bdev_name": "Nvme2n1" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd3", 00:07:10.904 "bdev_name": "Nvme2n2" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd4", 00:07:10.904 "bdev_name": "Nvme2n3" 00:07:10.904 }, 00:07:10.904 { 00:07:10.904 "nbd_device": "/dev/nbd5", 00:07:10.904 "bdev_name": "Nvme3n1" 00:07:10.904 } 00:07:10.904 ]' 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.904 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.162 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.421 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.680 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.995 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:11.995 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.253 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.510 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.768 05:54:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.768 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:13.026 /dev/nbd0 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.026 
05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.026 1+0 records in 00:07:13.026 1+0 records out 00:07:13.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515961 s, 7.9 MB/s 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.026 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:13.284 /dev/nbd1 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.284 1+0 records in 00:07:13.284 1+0 records out 00:07:13.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459453 s, 8.9 MB/s 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.284 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:13.542 /dev/nbd10 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.542 1+0 records in 00:07:13.542 1+0 records out 00:07:13.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558175 s, 7.3 MB/s 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.542 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:13.799 /dev/nbd11 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.799 05:54:36 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.799 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.799 1+0 records in 00:07:13.800 1+0 records out 00:07:13.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639482 s, 6.4 MB/s 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.800 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:14.057 /dev/nbd12 00:07:14.057 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:14.057 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:14.057 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:14.057 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.057 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.057 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.058 1+0 records in 00:07:14.058 1+0 records out 00:07:14.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820602 s, 5.0 MB/s 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.058 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:14.626 /dev/nbd13 
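The waitfornbd helper that this stretch of the trace repeats for every attached device (common/autotest_common.sh@868-889) reduces to the sketch below. The loop bounds, the /proc/partitions grep, and the dd/stat/rm probe are taken from the traced lines; the retry sleep, the temp-file location, and the failure return are assumptions, since only the success path is visible in this run.

waitfornbd() {
    local nbd_name=$1
    local i
    # @871-873: poll until the kernel publishes the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed interval; the trace only shows the loop bounds
    done
    # @884-889: prove the device is readable by pulling one 4 KiB block
    # with O_DIRECT and requiring the copy to be non-empty.
    local tmp=/tmp/nbdtest   # assumed path; the run above writes test/bdev/nbdtest
    for ((i = 1; i <= 20; i++)); do
        if dd "if=/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
            local size
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1   # assumed interval between retries
    done
    return 1   # assumed failure path; never reached in this run
}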
00:07:14.626 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:14.626 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:14.626 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:14.626 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.627 1+0 records in 00:07:14.627 1+0 records out 00:07:14.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687314 s, 6.0 MB/s 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.627 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd0", 00:07:14.886 "bdev_name": "Nvme0n1" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd1", 00:07:14.886 "bdev_name": "Nvme1n1" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd10", 00:07:14.886 "bdev_name": "Nvme2n1" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd11", 00:07:14.886 "bdev_name": "Nvme2n2" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd12", 00:07:14.886 "bdev_name": "Nvme2n3" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd13", 00:07:14.886 "bdev_name": "Nvme3n1" 00:07:14.886 } 00:07:14.886 ]' 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd0", 00:07:14.886 "bdev_name": "Nvme0n1" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd1", 00:07:14.886 "bdev_name": "Nvme1n1" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd10", 00:07:14.886 "bdev_name": "Nvme2n1" 
00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd11", 00:07:14.886 "bdev_name": "Nvme2n2" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd12", 00:07:14.886 "bdev_name": "Nvme2n3" 00:07:14.886 }, 00:07:14.886 { 00:07:14.886 "nbd_device": "/dev/nbd13", 00:07:14.886 "bdev_name": "Nvme3n1" 00:07:14.886 } 00:07:14.886 ]' 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.886 /dev/nbd1 00:07:14.886 /dev/nbd10 00:07:14.886 /dev/nbd11 00:07:14.886 /dev/nbd12 00:07:14.886 /dev/nbd13' 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.886 /dev/nbd1 00:07:14.886 /dev/nbd10 00:07:14.886 /dev/nbd11 00:07:14.886 /dev/nbd12 00:07:14.886 /dev/nbd13' 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:14.886 256+0 records in 00:07:14.886 256+0 records out 00:07:14.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108991 s, 96.2 MB/s 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.886 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.144 256+0 records in 00:07:15.144 256+0 records out 00:07:15.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17198 s, 6.1 MB/s 00:07:15.145 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.145 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.145 256+0 records in 00:07:15.145 256+0 records out 00:07:15.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163757 s, 6.4 MB/s 00:07:15.145 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.145 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:15.403 256+0 records in 00:07:15.403 256+0 records out 00:07:15.403 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17188 s, 6.1 MB/s 00:07:15.403 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.403 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:15.662 256+0 records in 00:07:15.662 256+0 records out 00:07:15.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17774 s, 5.9 MB/s 00:07:15.662 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.662 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:15.662 256+0 records in 00:07:15.662 256+0 records out 00:07:15.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173458 s, 6.0 MB/s 00:07:15.662 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.662 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:15.921 256+0 records in 00:07:15.921 256+0 records out 00:07:15.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169121 s, 6.2 MB/s 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.921 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.180 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.755 05:54:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.755 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.756 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:17.015 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.275 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:17.535 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:17.535 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:17.535 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:17.535 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.535 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.535 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.794 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:18.052 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:18.311 malloc_lvol_verify 00:07:18.311 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:18.569 072ccca0-61fd-43d0-97f7-cd7852a87103 00:07:18.569 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:18.826 d475f74d-64e0-45a2-9d72-f46a01ab580a 00:07:18.826 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:19.084 /dev/nbd0 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:19.084 mke2fs 1.47.0 (5-Feb-2023) 00:07:19.084 Discarding device blocks: 0/4096 done 00:07:19.084 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:19.084 00:07:19.084 Allocating group tables: 0/1 done 00:07:19.084 Writing inode tables: 0/1 done 00:07:19.084 Creating journal (1024 blocks): done 00:07:19.084 Writing superblocks and filesystem accounting information: 0/1 done 00:07:19.084 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:19.084 05:54:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.084 05:54:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72944 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72944 ']' 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72944 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72944 00:07:19.342 killing process with pid 72944 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72944' 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72944 00:07:19.342 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72944 00:07:19.601 05:54:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:19.601 00:07:19.601 real 0m12.010s 00:07:19.601 user 0m17.634s 00:07:19.601 sys 0m3.877s 00:07:19.601 ************************************ 00:07:19.601 END TEST bdev_nbd 00:07:19.601 ************************************ 00:07:19.601 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.601 05:54:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:19.601 05:54:42 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:19.601 05:54:42 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:19.601 skipping fio tests on NVMe due to multi-ns failures. 00:07:19.601 05:54:42 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
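The killprocess 72944 call that tears down the NBD target above traces through common/autotest_common.sh@950-974; a minimal reconstruction follows. Only the Linux success path is exercised in this run, so the early returns and the sudo special case below are assumptions.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # @950: a pid argument is required
    kill -0 "$pid" 2>/dev/null || return 0  # @954: assumed no-op if the process is already gone
    if [ "$(uname)" = Linux ]; then         # @955
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_0 in this run
        if [ "$process_name" = sudo ]; then # @960: branch not taken here;
            :   # the real helper handles sudo-wrapped children; body elided
        fi
    fi
    echo "killing process with pid $pid"    # @968
    kill "$pid"                             # @969
    wait "$pid"                             # @974: reap and propagate the exit status
}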
00:07:19.601 05:54:42 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:19.601 05:54:42 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:19.601 05:54:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:19.601 05:54:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.601 05:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.601 ************************************ 00:07:19.601 START TEST bdev_verify 00:07:19.601 ************************************ 00:07:19.601 05:54:42 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:19.601 [2024-12-08 05:54:42.617590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:19.601 [2024-12-08 05:54:42.617788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73344 ] 00:07:19.859 [2024-12-08 05:54:42.763779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.859 [2024-12-08 05:54:42.799360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.859 [2024-12-08 05:54:42.799420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.426 Running I/O for 5 seconds... 00:07:22.733 21120.00 IOPS, 82.50 MiB/s [2024-12-08T05:54:46.727Z] 19968.00 IOPS, 78.00 MiB/s [2024-12-08T05:54:47.661Z] 19776.00 IOPS, 77.25 MiB/s [2024-12-08T05:54:48.594Z] 19232.00 IOPS, 75.12 MiB/s [2024-12-08T05:54:48.594Z] 18739.20 IOPS, 73.20 MiB/s 00:07:25.549 Latency(us) 00:07:25.549 [2024-12-08T05:54:48.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.549 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x0 length 0xbd0bd 00:07:25.549 Nvme0n1 : 5.09 1534.16 5.99 0.00 0.00 83225.13 16324.42 76260.07 00:07:25.549 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:25.549 Nvme0n1 : 5.10 1556.34 6.08 0.00 0.00 82040.09 16324.42 73400.32 00:07:25.549 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x0 length 0xa0000 00:07:25.549 Nvme1n1 : 5.09 1533.66 5.99 0.00 0.00 83037.72 16324.42 76260.07 00:07:25.549 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0xa0000 length 0xa0000 00:07:25.549 Nvme1n1 : 5.10 1555.72 6.08 0.00 0.00 81955.96 16324.42 70540.57 00:07:25.549 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x0 length 0x80000 00:07:25.549 Nvme2n1 : 5.09 1533.22 5.99 0.00 0.00 82925.57 15966.95 73876.95 00:07:25.549 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x80000 length 0x80000 00:07:25.549 Nvme2n1 : 5.11 1554.47 6.07 0.00 0.00 81816.09 18588.39 66727.56 00:07:25.549 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x0 length 0x80000 00:07:25.549 Nvme2n2 : 5.09 1532.83 5.99 0.00 0.00 82800.28 15847.80 72923.69 00:07:25.549 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x80000 length 0x80000 00:07:25.549 Nvme2n2 : 5.11 1553.24 6.07 0.00 0.00 81723.14 20733.21 67204.19 00:07:25.549 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x0 length 0x80000 00:07:25.549 Nvme2n3 : 5.10 1532.38 5.99 0.00 0.00 82688.27 15073.28 72923.69 00:07:25.549 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x80000 length 0x80000 00:07:25.549 Nvme2n3 : 5.11 1552.03 6.06 0.00 0.00 81632.90 16205.27 70063.94 00:07:25.549 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x0 length 0x20000 00:07:25.549 Nvme3n1 : 5.10 1531.90 5.98 0.00 0.00 82580.80 10962.39 75783.45 00:07:25.549 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:25.549 Verification LBA range: start 0x20000 length 0x20000 00:07:25.549 Nvme3n1 : 5.12 1550.79 6.06 0.00 0.00 81546.98 9949.56 73876.95 00:07:25.549 [2024-12-08T05:54:48.594Z] =================================================================================================================== 00:07:25.549 [2024-12-08T05:54:48.594Z] Total : 18520.74 72.35 0.00 0.00 82326.65 9949.56 76260.07 00:07:25.806 00:07:25.807 real 0m6.288s 00:07:25.807 user 0m11.727s 00:07:25.807 sys 0m0.247s 00:07:25.807 05:54:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.807 05:54:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:25.807 ************************************ 00:07:25.807 END TEST bdev_verify 00:07:25.807 ************************************ 00:07:26.065 05:54:48 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:26.065 05:54:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:26.065 05:54:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.065 05:54:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.065 ************************************ 00:07:26.065 START TEST bdev_verify_big_io 00:07:26.065 ************************************ 00:07:26.065 05:54:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:26.065 [2024-12-08 05:54:48.963938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:26.065 [2024-12-08 05:54:48.964147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73431 ] 00:07:26.323 [2024-12-08 05:54:49.110602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.323 [2024-12-08 05:54:49.144788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.323 [2024-12-08 05:54:49.144858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.582 Running I/O for 5 seconds... 00:07:31.317 1717.00 IOPS, 107.31 MiB/s [2024-12-08T05:54:54.620Z] 2555.00 IOPS, 159.69 MiB/s [2024-12-08T05:54:55.557Z] 2057.00 IOPS, 128.56 MiB/s [2024-12-08T05:54:55.557Z] 2294.25 IOPS, 143.39 MiB/s 00:07:32.512 Latency(us) 00:07:32.512 [2024-12-08T05:54:55.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.512 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x0 length 0xbd0b 00:07:32.512 Nvme0n1 : 5.67 135.44 8.46 0.00 0.00 911930.72 18945.86 957063.91 00:07:32.512 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:32.512 Nvme0n1 : 5.67 135.35 8.46 0.00 0.00 914555.04 26929.34 953250.91 00:07:32.512 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x0 length 0xa000 00:07:32.512 Nvme1n1 : 5.68 135.12 8.44 0.00 0.00 887623.99 84839.33 796917.76 00:07:32.512 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0xa000 length 0xa000 00:07:32.512 Nvme1n1 : 5.68 135.29 8.46 0.00 0.00 890688.23 85792.58 781665.75 00:07:32.512 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x0 length 0x8000 00:07:32.512 Nvme2n1 : 5.81 136.42 8.53 0.00 0.00 853460.69 120109.61 709218.68 00:07:32.512 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x8000 length 0x8000 00:07:32.512 Nvme2n1 : 5.73 138.11 8.63 0.00 0.00 851255.77 55526.87 789291.75 00:07:32.512 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x0 length 0x8000 00:07:32.512 Nvme2n2 : 5.84 142.56 8.91 0.00 0.00 805848.62 17635.14 690153.66 00:07:32.512 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x8000 length 0x8000 00:07:32.512 Nvme2n2 : 5.76 144.33 9.02 0.00 0.00 799759.47 27167.65 815982.78 00:07:32.512 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x0 length 0x8000 00:07:32.512 Nvme2n3 : 5.85 146.02 9.13 0.00 0.00 766725.77 6762.12 896055.85 00:07:32.512 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x8000 length 0x8000 00:07:32.512 Nvme2n3 : 5.79 150.49 9.41 0.00 0.00 746113.01 24188.74 831234.79 00:07:32.512 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x0 length 0x2000 00:07:32.512 Nvme3n1 : 5.85 143.99 9.00 0.00 0.00 754193.26 2636.33 1609087.53 00:07:32.512 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.512 Verification LBA range: start 0x2000 length 0x2000 00:07:32.512 Nvme3n1 : 5.80 158.67 9.92 0.00 0.00 687871.61 1355.40 850299.81 00:07:32.512 [2024-12-08T05:54:55.557Z] =================================================================================================================== 00:07:32.512 [2024-12-08T05:54:55.557Z] Total : 1701.79 106.36 0.00 0.00 818487.34 1355.40 1609087.53 00:07:33.079 00:07:33.079 real 0m7.217s 00:07:33.079 user 0m13.591s 00:07:33.079 sys 0m0.241s 00:07:33.079 05:54:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.079 05:54:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:33.079 ************************************ 00:07:33.079 END TEST bdev_verify_big_io 00:07:33.079 ************************************ 00:07:33.337 05:54:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.337 05:54:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:33.337 05:54:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.337 05:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.337 ************************************ 00:07:33.337 START TEST bdev_write_zeroes 00:07:33.337 ************************************ 00:07:33.337 05:54:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.337 [2024-12-08 05:54:56.233202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:33.337 [2024-12-08 05:54:56.233396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73529 ] 00:07:33.595 [2024-12-08 05:54:56.386075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.595 [2024-12-08 05:54:56.428175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.853 Running I/O for 1 seconds... 
00:07:35.227 51456.00 IOPS, 201.00 MiB/s 00:07:35.227 Latency(us) 00:07:35.227 [2024-12-08T05:54:58.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.227 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:35.227 Nvme0n1 : 1.02 8591.01 33.56 0.00 0.00 14867.49 11379.43 37653.41 00:07:35.227 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:35.227 Nvme1n1 : 1.02 8582.08 33.52 0.00 0.00 14862.90 11558.17 39798.23 00:07:35.227 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:35.227 Nvme2n1 : 1.02 8572.23 33.49 0.00 0.00 14818.63 11498.59 36938.47 00:07:35.227 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:35.227 Nvme2n2 : 1.02 8563.23 33.45 0.00 0.00 14764.51 11439.01 37891.72 00:07:35.227 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:35.227 Nvme2n3 : 1.02 8554.30 33.42 0.00 0.00 14727.00 9770.82 37415.10 00:07:35.227 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:35.227 Nvme3n1 : 1.03 8544.89 33.38 0.00 0.00 14691.50 8400.52 36938.47 00:07:35.227 [2024-12-08T05:54:58.272Z] =================================================================================================================== 00:07:35.227 [2024-12-08T05:54:58.272Z] Total : 51407.74 200.81 0.00 0.00 14788.67 8400.52 39798.23 00:07:35.227 00:07:35.227 real 0m1.920s 00:07:35.227 user 0m1.623s 00:07:35.227 sys 0m0.180s 00:07:35.227 05:54:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.227 05:54:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:35.227 ************************************ 00:07:35.227 END TEST bdev_write_zeroes 00:07:35.227 ************************************ 00:07:35.227 05:54:58 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.227 05:54:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:35.227 05:54:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.227 05:54:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.227 ************************************ 00:07:35.227 START TEST bdev_json_nonenclosed 00:07:35.227 ************************************ 00:07:35.227 05:54:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.227 [2024-12-08 05:54:58.208029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
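The write_zeroes pass above reuses the same bdevperf harness with only the workload and runtime changed; as recorded in the run_test line, it is effectively:

  # single core this time (the EAL line shows -c 0x1), 4 KiB write_zeroes for 1 second
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1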
00:07:35.227 [2024-12-08 05:54:58.208232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73571 ] 00:07:35.486 [2024-12-08 05:54:58.353220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.486 [2024-12-08 05:54:58.389765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.486 [2024-12-08 05:54:58.389937] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:35.486 [2024-12-08 05:54:58.389979] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.486 [2024-12-08 05:54:58.389997] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.486 00:07:35.486 real 0m0.389s 00:07:35.486 user 0m0.174s 00:07:35.486 sys 0m0.111s 00:07:35.486 05:54:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.486 05:54:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:35.486 ************************************ 00:07:35.486 END TEST bdev_json_nonenclosed 00:07:35.486 ************************************ 00:07:35.745 05:54:58 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.745 05:54:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:35.745 05:54:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.745 05:54:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.745 ************************************ 00:07:35.745 START TEST bdev_json_nonarray 00:07:35.745 ************************************ 00:07:35.745 05:54:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.745 [2024-12-08 05:54:58.623901] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:35.745 [2024-12-08 05:54:58.624071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73597 ] 00:07:35.745 [2024-12-08 05:54:58.756390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.004 [2024-12-08 05:54:58.792455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.004 [2024-12-08 05:54:58.792608] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:36.004 [2024-12-08 05:54:58.792646] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:36.004 [2024-12-08 05:54:58.792661] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.004 00:07:36.004 real 0m0.345s 00:07:36.004 user 0m0.149s 00:07:36.004 sys 0m0.094s 00:07:36.004 05:54:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.004 05:54:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:36.004 ************************************ 00:07:36.004 END TEST bdev_json_nonarray 00:07:36.004 ************************************ 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:36.004 05:54:58 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:36.004 00:07:36.004 real 0m32.611s 00:07:36.004 user 0m51.548s 00:07:36.004 sys 0m5.979s 00:07:36.004 05:54:58 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.004 05:54:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.004 ************************************ 00:07:36.004 END TEST blockdev_nvme 00:07:36.004 ************************************ 00:07:36.004 05:54:58 -- spdk/autotest.sh@209 -- # uname -s 00:07:36.004 05:54:58 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:36.004 05:54:58 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:36.004 05:54:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:36.004 05:54:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.004 05:54:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.004 ************************************ 00:07:36.004 START TEST blockdev_nvme_gpt 00:07:36.004 ************************************ 00:07:36.004 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:36.264 * Looking for test storage... 
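The two json_config failures just above are intentional negative tests: nonenclosed.json and nonarray.json are malformed on purpose, and each case passes precisely because the app exits non-zero (the "spdk_app_stop'd on non-zero" warning). A sketch of the two violations, with hypothetical file contents inferred from the error messages:

  # nonenclosed.json: top-level content not wrapped in a single JSON object
  #     "subsystems": [ ... ]        -> ERROR: not enclosed in {}
  # nonarray.json: "subsystems" key present but not an array
  #     { "subsystems": { ... } }    -> ERROR: 'subsystems' should be an array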
00:07:36.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.264 05:54:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.264 --rc genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.264 --rc 
genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.264 --rc genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.264 --rc genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73675 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73675 
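waitforlisten blocks until the spdk_tgt started here (pid 73675) is up and answering on the default RPC socket; every later rpc_cmd in this suite (load_subsystem_config, bdev_get_bdevs, bdev_wait_for_examine) goes over that same socket. A minimal sketch of the same query done by hand, assuming the stock rpc.py client:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs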
00:07:36.264 05:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73675 ']' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.264 05:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.523 [2024-12-08 05:54:59.317926] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:36.523 [2024-12-08 05:54:59.318132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73675 ] 00:07:36.523 [2024-12-08 05:54:59.468505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.523 [2024-12-08 05:54:59.509567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.460 05:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.460 05:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:37.460 05:55:00 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:37.460 05:55:00 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:37.460 05:55:00 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:37.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.976 Waiting for block devices as requested 00:07:37.976 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:37.976 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.234 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.234 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.497 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:43.497 05:55:06 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:43.497 05:55:06 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:43.497 BYT; 00:07:43.497 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:43.497 BYT; 00:07:43.497 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:43.497 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:43.497 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:43.497 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:43.497 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:43.497 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:43.497 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:43.497 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:43.498 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:43.498 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:43.498 05:55:06 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:43.498 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:43.498 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:44.433 The operation has completed successfully. 00:07:44.433 05:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:45.396 The operation has completed successfully. 00:07:45.396 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:45.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.530 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.530 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.530 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.530 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.788 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:46.788 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.788 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.788 [] 00:07:46.788 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.788 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:46.788 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:46.788 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:46.788 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.788 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:46.788 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.788 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.046 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.046 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:47.046 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:47.046 05:55:09 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.046 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.046 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.046 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:47.046 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.046 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.046 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:47.046 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:47.046 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.046 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:47.306 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.306 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:47.306 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:47.306 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b7b4414a-6791-4487-b3f3-eb3a958e8ccd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b7b4414a-6791-4487-b3f3-eb3a958e8ccd",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f64c5c51-16ec-47b8-ad4b-754e514c505b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f64c5c51-16ec-47b8-ad4b-754e514c505b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "695c9252-235c-4124-8d66-2cc409b20d3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "695c9252-235c-4124-8d66-2cc409b20d3a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "46309e9c-a72a-4725-b464-bacd62db1ae4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "46309e9c-a72a-4725-b464-bacd62db1ae4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7e1df359-0703-4e18-81bb-43b8d81e9f13"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7e1df359-0703-4e18-81bb-43b8d81e9f13",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:47.306 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:47.306 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:47.307 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:47.307 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73675 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73675 ']' 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73675 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73675 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.307 killing process with pid 73675 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73675' 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73675 00:07:47.307 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73675 00:07:47.565 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:47.565 05:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:47.565 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:47.565 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.565 05:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.565 ************************************ 00:07:47.565 START TEST bdev_hello_world 00:07:47.565 ************************************ 00:07:47.565 05:55:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:47.824 
[2024-12-08 05:55:10.635838] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:47.824 [2024-12-08 05:55:10.636019] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74284 ] 00:07:47.824 [2024-12-08 05:55:10.783423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.824 [2024-12-08 05:55:10.816617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.391 [2024-12-08 05:55:11.186727] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:48.391 [2024-12-08 05:55:11.186796] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:48.391 [2024-12-08 05:55:11.186844] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:48.391 [2024-12-08 05:55:11.189276] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:48.391 [2024-12-08 05:55:11.189831] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:48.391 [2024-12-08 05:55:11.189881] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:48.391 [2024-12-08 05:55:11.190162] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:48.391 00:07:48.391 [2024-12-08 05:55:11.190234] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:48.391 00:07:48.391 real 0m0.840s 00:07:48.391 user 0m0.579s 00:07:48.391 sys 0m0.157s 00:07:48.391 05:55:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.391 05:55:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:48.391 ************************************ 00:07:48.391 END TEST bdev_hello_world 00:07:48.391 ************************************ 00:07:48.391 05:55:11 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:48.391 05:55:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:48.391 05:55:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.391 05:55:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.650 ************************************ 00:07:48.650 START TEST bdev_bounds 00:07:48.650 ************************************ 00:07:48.650 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:48.650 05:55:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74315 00:07:48.650 05:55:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:48.650 Process bdevio pid: 74315 00:07:48.650 05:55:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74315' 00:07:48.650 05:55:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74315 00:07:48.650 05:55:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:48.650 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 74315 ']' 00:07:48.651 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.651 05:55:11 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.651 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.651 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.651 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:48.651 [2024-12-08 05:55:11.534767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:48.651 [2024-12-08 05:55:11.534972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74315 ] 00:07:48.651 [2024-12-08 05:55:11.682005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.909 [2024-12-08 05:55:11.724360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.909 [2024-12-08 05:55:11.724491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.909 [2024-12-08 05:55:11.724570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.846 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.846 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:49.846 05:55:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:49.846 I/O targets: 00:07:49.846 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:49.846 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:49.846 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:49.846 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:49.846 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:49.846 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:49.846 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:49.846 00:07:49.846 00:07:49.846 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.846 http://cunit.sourceforge.net/ 00:07:49.846 00:07:49.846 00:07:49.846 Suite: bdevio tests on: Nvme3n1 00:07:49.846 Test: blockdev write read block ...passed 00:07:49.846 Test: blockdev write zeroes read block ...passed 00:07:49.846 Test: blockdev write zeroes read no split ...passed 00:07:49.846 Test: blockdev write zeroes read split ...passed 00:07:49.846 Test: blockdev write zeroes read split partial ...passed 00:07:49.846 Test: blockdev reset ...[2024-12-08 05:55:12.676921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:49.846 [2024-12-08 05:55:12.679405] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:49.846 passed 00:07:49.846 Test: blockdev write read 8 blocks ...passed 00:07:49.846 Test: blockdev write read size > 128k ...passed 00:07:49.846 Test: blockdev write read invalid size ...passed 00:07:49.846 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.846 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.846 Test: blockdev write read max offset ...passed 00:07:49.846 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.847 Test: blockdev writev readv 8 blocks ...passed 00:07:49.847 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.847 Test: blockdev writev readv block ...passed 00:07:49.847 Test: blockdev writev readv size > 128k ...passed 00:07:49.847 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.847 Test: blockdev comparev and writev ...[2024-12-08 05:55:12.685911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c320a000 len:0x1000 00:07:49.847 [2024-12-08 05:55:12.686063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme passthru rw ...passed 00:07:49.847 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:55:12.687103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:49.847 [2024-12-08 05:55:12.687337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme admin passthru ...passed 00:07:49.847 Test: blockdev copy ...passed 00:07:49.847 Suite: bdevio tests on: Nvme2n3 00:07:49.847 Test: blockdev write read block ...passed 00:07:49.847 Test: blockdev write zeroes read block ...passed 00:07:49.847 Test: blockdev write zeroes read no split ...passed 00:07:49.847 Test: blockdev write zeroes read split ...passed 00:07:49.847 Test: blockdev write zeroes read split partial ...passed 00:07:49.847 Test: blockdev reset ...[2024-12-08 05:55:12.711973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:49.847 [2024-12-08 05:55:12.714681] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
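[Note] Every suite opens with the same disconnect/reconnect cycle logged above; bdevio issues it internally, but an equivalent reset can be requested by hand over RPC while debugging. A hedged sketch, assuming the controller behind these namespaces is registered under the name Nvme2:

    # Ask the bdev_nvme driver to disconnect and reconnect one controller.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2
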
00:07:49.847 passed 00:07:49.847 Test: blockdev write read 8 blocks ...passed 00:07:49.847 Test: blockdev write read size > 128k ...passed 00:07:49.847 Test: blockdev write read invalid size ...passed 00:07:49.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.847 Test: blockdev write read max offset ...passed 00:07:49.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.847 Test: blockdev writev readv 8 blocks ...passed 00:07:49.847 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.847 Test: blockdev writev readv block ...passed 00:07:49.847 Test: blockdev writev readv size > 128k ...passed 00:07:49.847 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.847 Test: blockdev comparev and writev ...[2024-12-08 05:55:12.720430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0404000 len:0x1000 00:07:49.847 [2024-12-08 05:55:12.720601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme passthru rw ...passed 00:07:49.847 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:55:12.721440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:49.847 [2024-12-08 05:55:12.721567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme admin passthru ...passed 00:07:49.847 Test: blockdev copy ...passed 00:07:49.847 Suite: bdevio tests on: Nvme2n2 00:07:49.847 Test: blockdev write read block ...passed 00:07:49.847 Test: blockdev write zeroes read block ...passed 00:07:49.847 Test: blockdev write zeroes read no split ...passed 00:07:49.847 Test: blockdev write zeroes read split ...passed 00:07:49.847 Test: blockdev write zeroes read split partial ...passed 00:07:49.847 Test: blockdev reset ...[2024-12-08 05:55:12.736226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:49.847 [2024-12-08 05:55:12.738716] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:49.847 passed 00:07:49.847 Test: blockdev write read 8 blocks ...passed 00:07:49.847 Test: blockdev write read size > 128k ...passed 00:07:49.847 Test: blockdev write read invalid size ...passed 00:07:49.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.847 Test: blockdev write read max offset ...passed 00:07:49.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.847 Test: blockdev writev readv 8 blocks ...passed 00:07:49.847 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.847 Test: blockdev writev readv block ...passed 00:07:49.847 Test: blockdev writev readv size > 128k ...passed 00:07:49.847 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.847 Test: blockdev comparev and writev ...[2024-12-08 05:55:12.744858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0404000 len:0x1000 00:07:49.847 [2024-12-08 05:55:12.745012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme passthru rw ...passed 00:07:49.847 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:55:12.745843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:49.847 [2024-12-08 05:55:12.745981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme admin passthru ...passed 00:07:49.847 Test: blockdev copy ...passed 00:07:49.847 Suite: bdevio tests on: Nvme2n1 00:07:49.847 Test: blockdev write read block ...passed 00:07:49.847 Test: blockdev write zeroes read block ...passed 00:07:49.847 Test: blockdev write zeroes read no split ...passed 00:07:49.847 Test: blockdev write zeroes read split ...passed 00:07:49.847 Test: blockdev write zeroes read split partial ...passed 00:07:49.847 Test: blockdev reset ...[2024-12-08 05:55:12.760564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:49.847 [2024-12-08 05:55:12.762952] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:49.847 passed 00:07:49.847 Test: blockdev write read 8 blocks ...passed 00:07:49.847 Test: blockdev write read size > 128k ...passed 00:07:49.847 Test: blockdev write read invalid size ...passed 00:07:49.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.847 Test: blockdev write read max offset ...passed 00:07:49.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.847 Test: blockdev writev readv 8 blocks ...passed 00:07:49.847 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.847 Test: blockdev writev readv block ...passed 00:07:49.847 Test: blockdev writev readv size > 128k ...passed 00:07:49.847 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.847 Test: blockdev comparev and writev ...[2024-12-08 05:55:12.769375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0406000 len:0x1000 00:07:49.847 [2024-12-08 05:55:12.769526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme passthru rw ...passed 00:07:49.847 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:55:12.770499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:49.847 [2024-12-08 05:55:12.770620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme admin passthru ...passed 00:07:49.847 Test: blockdev copy ...passed 00:07:49.847 Suite: bdevio tests on: Nvme1n1p2 00:07:49.847 Test: blockdev write read block ...passed 00:07:49.847 Test: blockdev write zeroes read block ...passed 00:07:49.847 Test: blockdev write zeroes read no split ...passed 00:07:49.847 Test: blockdev write zeroes read split ...passed 00:07:49.847 Test: blockdev write zeroes read split partial ...passed 00:07:49.847 Test: blockdev reset ...[2024-12-08 05:55:12.784138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:49.847 [2024-12-08 05:55:12.786255] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:49.847 passed 00:07:49.847 Test: blockdev write read 8 blocks ...passed 00:07:49.847 Test: blockdev write read size > 128k ...passed 00:07:49.847 Test: blockdev write read invalid size ...passed 00:07:49.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.847 Test: blockdev write read max offset ...passed 00:07:49.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.847 Test: blockdev writev readv 8 blocks ...passed 00:07:49.847 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.847 Test: blockdev writev readv block ...passed 00:07:49.847 Test: blockdev writev readv size > 128k ...passed 00:07:49.847 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.847 Test: blockdev comparev and writev ...[2024-12-08 05:55:12.792324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c0402000 len:0x1000 00:07:49.847 passed 00:07:49.847 Test: blockdev nvme passthru rw ...passed 00:07:49.847 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.847 Test: blockdev nvme admin passthru ...passed 00:07:49.847 Test: blockdev copy ...[2024-12-08 05:55:12.792587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.847 passed 00:07:49.847 Suite: bdevio tests on: Nvme1n1p1 00:07:49.847 Test: blockdev write read block ...passed 00:07:49.847 Test: blockdev write zeroes read block ...passed 00:07:49.847 Test: blockdev write zeroes read no split ...passed 00:07:49.847 Test: blockdev write zeroes read split ...passed 00:07:49.847 Test: blockdev write zeroes read split partial ...passed 00:07:49.847 Test: blockdev reset ...[2024-12-08 05:55:12.805185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:49.847 passed 00:07:49.847 Test: blockdev write read 8 blocks ...[2024-12-08 05:55:12.807141] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:49.847 passed 00:07:49.847 Test: blockdev write read size > 128k ...passed 00:07:49.847 Test: blockdev write read invalid size ...passed 00:07:49.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.847 Test: blockdev write read max offset ...passed 00:07:49.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.848 Test: blockdev writev readv 8 blocks ...passed 00:07:49.848 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.848 Test: blockdev writev readv block ...passed 00:07:49.848 Test: blockdev writev readv size > 128k ...passed 00:07:49.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.848 Test: blockdev comparev and writev ...[2024-12-08 05:55:12.813101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c643b000 len:0x1000 00:07:49.848 [2024-12-08 05:55:12.813161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.848 passed 00:07:49.848 Test: blockdev nvme passthru rw ...passed 00:07:49.848 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.848 Test: blockdev nvme admin passthru ...passed 00:07:49.848 Test: blockdev copy ...passed 00:07:49.848 Suite: bdevio tests on: Nvme0n1 00:07:49.848 Test: blockdev write read block ...passed 00:07:49.848 Test: blockdev write zeroes read block ...passed 00:07:49.848 Test: blockdev write zeroes read no split ...passed 00:07:49.848 Test: blockdev write zeroes read split ...passed 00:07:49.848 Test: blockdev write zeroes read split partial ...passed 00:07:49.848 Test: blockdev reset ...[2024-12-08 05:55:12.827546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:49.848 passed 00:07:49.848 Test: blockdev write read 8 blocks ...[2024-12-08 05:55:12.829817] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:49.848 passed 00:07:49.848 Test: blockdev write read size > 128k ...passed 00:07:49.848 Test: blockdev write read invalid size ...passed 00:07:49.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.848 Test: blockdev write read max offset ...passed 00:07:49.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.848 Test: blockdev writev readv 8 blocks ...passed 00:07:49.848 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.848 Test: blockdev writev readv block ...passed 00:07:49.848 Test: blockdev writev readv size > 128k ...passed 00:07:49.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.848 Test: blockdev comparev and writev ...[2024-12-08 05:55:12.835813] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:49.848 separate metadata which is not supported yet. 
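[Note] The skip above is expected: Nvme0n1 was created with a separate metadata buffer, which blockdev_comparev_and_writev does not support yet. One way to check a bdev's metadata layout before running the suite, assuming the default RPC socket and the usual bdev_get_bdevs fields (md_size, md_interleave), is:

    # Dump the Nvme0n1 descriptor; a non-zero md_size with md_interleave
    # false indicates the separate-metadata case skipped here.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0]'
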
00:07:49.848 passed 00:07:49.848 Test: blockdev nvme passthru rw ...passed 00:07:49.848 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:55:12.836809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:49.848 [2024-12-08 05:55:12.836865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:49.848 passed 00:07:49.848 Test: blockdev nvme admin passthru ...passed 00:07:49.848 Test: blockdev copy ...passed 00:07:49.848 00:07:49.848 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.848 suites 7 7 n/a 0 0 00:07:49.848 tests 161 161 161 0 0 00:07:49.848 asserts 1025 1025 1025 0 n/a 00:07:49.848 00:07:49.848 Elapsed time = 0.419 seconds 00:07:49.848 0 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74315 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 74315 ']' 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 74315 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74315 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.848 killing process with pid 74315 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74315' 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 74315 00:07:49.848 05:55:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 74315 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:50.106 ************************************ 00:07:50.106 END TEST bdev_bounds 00:07:50.106 ************************************ 00:07:50.106 00:07:50.106 real 0m1.627s 00:07:50.106 user 0m4.260s 00:07:50.106 sys 0m0.315s 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:50.106 05:55:13 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:50.106 05:55:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:50.106 05:55:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.106 05:55:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.106 ************************************ 00:07:50.106 START TEST bdev_nbd 00:07:50.106 ************************************ 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:50.106 05:55:13 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74369 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74369 /var/tmp/spdk-nbd.sock 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 74369 ']' 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.106 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:50.364 [2024-12-08 05:55:13.206722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
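[Note] The nbd test hosts its bdevs in a bare bdev_svc app on a dedicated RPC socket and blocks until that socket answers, per the waitforlisten message above. A simplified sketch of that startup-and-wait step, polling with the generic rpc_get_methods call rather than the harness's own probe:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    $SPDK/test/app/bdev_svc/bdev_svc -r $SOCK -i 0 --json $SPDK/test/bdev/bdev.json '' &
    nbd_pid=$!
    # Poll until the app is listening on its UNIX socket (~10 s timeout).
    for _ in $(seq 1 100); do
        $SPDK/scripts/rpc.py -s $SOCK rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
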
00:07:50.364 [2024-12-08 05:55:13.206936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.364 [2024-12-08 05:55:13.352315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.364 [2024-12-08 05:55:13.388679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:51.298 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.556 1+0 records in 00:07:51.556 1+0 records out 00:07:51.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568072 s, 7.2 MB/s 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:51.556 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.121 1+0 records in 00:07:52.121 1+0 records out 00:07:52.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537211 s, 7.6 MB/s 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.121 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.122 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:52.122 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.122 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.122 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.379 1+0 records in 00:07:52.379 1+0 records out 00:07:52.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545876 s, 7.5 MB/s 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.379 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.636 1+0 records in 00:07:52.636 1+0 records out 00:07:52.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581274 s, 7.0 MB/s 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.636 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.893 1+0 records in 00:07:52.893 1+0 records out 00:07:52.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437988 s, 9.4 MB/s 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:52.893 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.894 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.894 05:55:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:52.894 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.894 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.894 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.151 1+0 records in 00:07:53.151 1+0 records out 00:07:53.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786597 s, 5.2 MB/s 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.151 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.408 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.665 1+0 records in 00:07:53.665 1+0 records out 00:07:53.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781592 s, 5.2 MB/s 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.665 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd0", 00:07:53.923 "bdev_name": "Nvme0n1" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd1", 00:07:53.923 "bdev_name": "Nvme1n1p1" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd2", 00:07:53.923 "bdev_name": "Nvme1n1p2" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd3", 00:07:53.923 "bdev_name": "Nvme2n1" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd4", 00:07:53.923 "bdev_name": "Nvme2n2" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd5", 00:07:53.923 "bdev_name": "Nvme2n3" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd6", 00:07:53.923 "bdev_name": "Nvme3n1" 00:07:53.923 } 00:07:53.923 ]' 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd0", 00:07:53.923 "bdev_name": "Nvme0n1" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd1", 00:07:53.923 "bdev_name": "Nvme1n1p1" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd2", 00:07:53.923 "bdev_name": "Nvme1n1p2" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd3", 00:07:53.923 "bdev_name": "Nvme2n1" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd4", 00:07:53.923 "bdev_name": "Nvme2n2" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd5", 00:07:53.923 "bdev_name": "Nvme2n3" 00:07:53.923 }, 00:07:53.923 { 00:07:53.923 "nbd_device": "/dev/nbd6", 00:07:53.923 "bdev_name": "Nvme3n1" 00:07:53.923 } 00:07:53.923 ]' 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.923 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.181 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.439 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.697 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.697 05:55:17 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.955 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.213 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.470 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.728 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.985 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.985 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.985 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:56.243 
05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.243 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:56.527 /dev/nbd0 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.527 1+0 records in 00:07:56.527 1+0 records out 00:07:56.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007664 s, 5.3 MB/s 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.527 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:56.783 /dev/nbd1 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.783 05:55:19 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.783 1+0 records in 00:07:56.783 1+0 records out 00:07:56.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643485 s, 6.4 MB/s 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.783 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:57.040 /dev/nbd10 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.040 1+0 records in 00:07:57.040 1+0 records out 00:07:57.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755337 s, 5.4 MB/s 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.040 05:55:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.041 05:55:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:57.041 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.041 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.041 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:57.297 /dev/nbd11 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.297 1+0 records in 00:07:57.297 1+0 records out 00:07:57.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792233 s, 5.2 MB/s 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.297 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:57.555 /dev/nbd12 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
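Editor's note: the polling the trace is stepping through here (waitfornbd in autotest_common.sh) follows a simple two-stage pattern. Below is a minimal bash sketch of that pattern, reconstructed from the xtrace output; the helper name, the temp-file path, and the sleep pacing are assumptions, not the harness's exact code:

    waitfornbd_sketch() {
        local nbd_name=$1 i size
        # Stage 1: poll until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing; the xtrace output does not show it
        done
        # Stage 2: prove the device answers I/O with one 4 KiB O_DIRECT read.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null || continue
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

The trace shows exactly this shape for every nbd device: the grep against /proc/partitions just above is stage 1 for nbd12, and the dd/stat pair that follows is stage 2.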
00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.555 1+0 records in 00:07:57.555 1+0 records out 00:07:57.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763487 s, 5.4 MB/s 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.555 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:57.813 /dev/nbd13 00:07:57.813 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.070 1+0 records in 00:07:58.070 1+0 records out 00:07:58.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623531 s, 6.6 MB/s 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:58.070 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:58.328 /dev/nbd14 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.328 1+0 records in 00:07:58.328 1+0 records out 00:07:58.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903787 s, 4.5 MB/s 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.328 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd0", 00:07:58.586 "bdev_name": "Nvme0n1" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd1", 00:07:58.586 "bdev_name": "Nvme1n1p1" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd10", 00:07:58.586 "bdev_name": "Nvme1n1p2" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd11", 00:07:58.586 "bdev_name": "Nvme2n1" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd12", 00:07:58.586 "bdev_name": "Nvme2n2" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd13", 00:07:58.586 "bdev_name": "Nvme2n3" 
00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd14", 00:07:58.586 "bdev_name": "Nvme3n1" 00:07:58.586 } 00:07:58.586 ]' 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd0", 00:07:58.586 "bdev_name": "Nvme0n1" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd1", 00:07:58.586 "bdev_name": "Nvme1n1p1" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd10", 00:07:58.586 "bdev_name": "Nvme1n1p2" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd11", 00:07:58.586 "bdev_name": "Nvme2n1" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd12", 00:07:58.586 "bdev_name": "Nvme2n2" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd13", 00:07:58.586 "bdev_name": "Nvme2n3" 00:07:58.586 }, 00:07:58.586 { 00:07:58.586 "nbd_device": "/dev/nbd14", 00:07:58.586 "bdev_name": "Nvme3n1" 00:07:58.586 } 00:07:58.586 ]' 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.586 /dev/nbd1 00:07:58.586 /dev/nbd10 00:07:58.586 /dev/nbd11 00:07:58.586 /dev/nbd12 00:07:58.586 /dev/nbd13 00:07:58.586 /dev/nbd14' 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.586 /dev/nbd1 00:07:58.586 /dev/nbd10 00:07:58.586 /dev/nbd11 00:07:58.586 /dev/nbd12 00:07:58.586 /dev/nbd13 00:07:58.586 /dev/nbd14' 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:58.586 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:58.587 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.587 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.587 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.587 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.587 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:58.587 256+0 records in 00:07:58.587 256+0 records out 00:07:58.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00870791 s, 120 MB/s 00:07:58.587 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.587 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.860 256+0 records in 00:07:58.860 256+0 records out 00:07:58.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.181905 s, 5.8 MB/s 00:07:58.860 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.861 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:58.861 256+0 records in 00:07:58.861 256+0 records out 00:07:58.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184617 s, 5.7 MB/s 00:07:58.861 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.861 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:59.118 256+0 records in 00:07:59.118 256+0 records out 00:07:59.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177824 s, 5.9 MB/s 00:07:59.118 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.118 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:59.376 256+0 records in 00:07:59.376 256+0 records out 00:07:59.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178536 s, 5.9 MB/s 00:07:59.376 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.376 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:59.376 256+0 records in 00:07:59.376 256+0 records out 00:07:59.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165159 s, 6.3 MB/s 00:07:59.376 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.376 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:59.634 256+0 records in 00:07:59.634 256+0 records out 00:07:59.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188779 s, 5.6 MB/s 00:07:59.634 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.634 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:59.892 256+0 records in 00:07:59.892 256+0 records out 00:07:59.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184103 s, 5.7 MB/s 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.892 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.150 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.716 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.974 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.233 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.491 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.749 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.006 05:55:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:02.269 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:02.525 malloc_lvol_verify 00:08:02.525 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:02.781 f842ba10-6636-445f-88e0-10e4f5c58d7d 00:08:02.782 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:03.039 eb3a853e-216e-4562-aea2-84e0b3874b39 00:08:03.039 05:55:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:03.296 /dev/nbd0 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:03.296 mke2fs 1.47.0 (5-Feb-2023) 00:08:03.296 Discarding device blocks: 0/4096 done 00:08:03.296 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:03.296 00:08:03.296 Allocating group tables: 0/1 done 00:08:03.296 Writing inode tables: 0/1 done 00:08:03.296 Creating journal (1024 blocks): done 00:08:03.296 Writing superblocks and filesystem accounting information: 0/1 done 00:08:03.296 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:03.296 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74369 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 74369 ']' 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 74369 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74369 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.554 killing process with pid 74369 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74369' 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 74369 00:08:03.554 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 74369 00:08:03.812 ************************************ 00:08:03.812 END TEST bdev_nbd 00:08:03.812 ************************************ 00:08:03.812 05:55:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:03.812 00:08:03.812 real 0m13.674s 00:08:03.812 user 0m20.048s 00:08:03.812 sys 0m4.533s 00:08:03.812 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.812 05:55:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:03.812 05:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:03.812 05:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:03.812 05:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:03.812 skipping fio tests on NVMe due to multi-ns failures. 00:08:03.812 05:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:03.812 05:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:03.812 05:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:03.812 05:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:03.812 05:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.812 05:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.812 ************************************ 00:08:03.812 START TEST bdev_verify 00:08:03.812 ************************************ 00:08:03.812 05:55:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:04.070 [2024-12-08 05:55:26.950523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:04.071 [2024-12-08 05:55:26.950718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74806 ] 00:08:04.071 [2024-12-08 05:55:27.097504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.328 [2024-12-08 05:55:27.133013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.328 [2024-12-08 05:55:27.133095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.585 Running I/O for 5 seconds... 
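Editor's note: the bdevperf command launched above drives this verify pass. A hedged gloss of its flags as they appear in the trace (meanings per common bdevperf usage; the $SPDK_REPO variable is illustrative, not from the log):

    # -q 128     queue depth per job
    # -o 4096    I/O size in bytes (4 KiB)
    # -w verify  write, read back, and compare workload
    # -t 5       run time in seconds
    # -m 0x3     core mask, i.e. the two reactors started above
    # (-C is reproduced from the trace without interpretation)
    "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-second samples that follow convert as throughput = IOPS × I/O size: 19456 IOPS × 4096 B / 2^20 = 76.00 MiB/s, exactly the first sample, and 18944 / 256 = 74.00 MiB/s the second.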
00:08:06.893 19456.00 IOPS, 76.00 MiB/s [2024-12-08T05:55:30.876Z] 18944.00 IOPS, 74.00 MiB/s [2024-12-08T05:55:32.255Z] 18581.33 IOPS, 72.58 MiB/s [2024-12-08T05:55:32.823Z] 18416.00 IOPS, 71.94 MiB/s [2024-12-08T05:55:32.823Z] 18214.40 IOPS, 71.15 MiB/s 00:08:09.778 Latency(us) 00:08:09.778 [2024-12-08T05:55:32.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.778 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x0 length 0xbd0bd 00:08:09.778 Nvme0n1 : 5.05 1318.33 5.15 0.00 0.00 96658.44 19899.11 89128.96 00:08:09.778 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:09.778 Nvme0n1 : 5.06 1239.30 4.84 0.00 0.00 102944.11 20852.36 109623.85 00:08:09.778 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x0 length 0x4ff80 00:08:09.778 Nvme1n1p1 : 5.08 1323.15 5.17 0.00 0.00 96142.35 9770.82 85792.58 00:08:09.778 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:09.778 Nvme1n1p1 : 5.06 1238.80 4.84 0.00 0.00 102782.14 23235.49 102951.10 00:08:09.778 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x0 length 0x4ff7f 00:08:09.778 Nvme1n1p2 : 5.08 1321.99 5.16 0.00 0.00 95980.03 12094.37 81979.58 00:08:09.778 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:09.778 Nvme1n1p2 : 5.06 1238.32 4.84 0.00 0.00 102580.24 24665.37 102951.10 00:08:09.778 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x0 length 0x80000 00:08:09.778 Nvme2n1 : 5.09 1321.10 5.16 0.00 0.00 95875.31 14000.87 77689.95 00:08:09.778 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x80000 length 0x80000 00:08:09.778 Nvme2n1 : 5.07 1237.87 4.84 0.00 0.00 102414.76 25856.93 103904.35 00:08:09.778 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x0 length 0x80000 00:08:09.778 Nvme2n2 : 5.10 1329.86 5.19 0.00 0.00 95352.97 9592.09 81979.58 00:08:09.778 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x80000 length 0x80000 00:08:09.778 Nvme2n2 : 5.08 1246.37 4.87 0.00 0.00 101600.03 4140.68 106287.48 00:08:09.778 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x0 length 0x80000 00:08:09.778 Nvme2n3 : 5.10 1329.33 5.19 0.00 0.00 95233.05 10009.13 85315.96 00:08:09.778 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x80000 length 0x80000 00:08:09.778 Nvme2n3 : 5.09 1245.53 4.87 0.00 0.00 101486.79 6136.55 108670.60 00:08:09.778 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x0 length 0x20000 00:08:09.778 Nvme3n1 : 5.11 1328.88 5.19 0.00 0.00 95118.26 10366.60 88175.71 00:08:09.778 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.778 Verification LBA range: start 0x20000 length 0x20000 
00:08:09.778 Nvme3n1 : 5.10 1254.82 4.90 0.00 0.00 100730.30 8102.63 110100.48 00:08:09.778 [2024-12-08T05:55:32.823Z] =================================================================================================================== 00:08:09.778 [2024-12-08T05:55:32.823Z] Total : 17973.62 70.21 0.00 0.00 98813.03 4140.68 110100.48 00:08:10.347 00:08:10.347 real 0m6.306s 00:08:10.347 user 0m11.763s 00:08:10.347 sys 0m0.231s 00:08:10.347 05:55:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.347 ************************************ 00:08:10.347 END TEST bdev_verify 00:08:10.347 ************************************ 00:08:10.347 05:55:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 05:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.347 05:55:33 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:10.347 05:55:33 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.347 05:55:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 ************************************ 00:08:10.347 START TEST bdev_verify_big_io 00:08:10.347 ************************************ 00:08:10.347 05:55:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.347 [2024-12-08 05:55:33.299273] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:10.347 [2024-12-08 05:55:33.299505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74899 ] 00:08:10.606 [2024-12-08 05:55:33.448564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.606 [2024-12-08 05:55:33.484511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.606 [2024-12-08 05:55:33.484589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.174 Running I/O for 5 seconds... 
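Editor's note: this big-I/O pass repeats the verify workload with -o 65536, so each I/O covers sixteen 4 KiB blocks and the same conversion applies: 774 IOPS × 64 KiB / 2^20 = 48.38 MiB/s and 2202 × 64 KiB / 2^20 = 137.62 MiB/s, matching the first two samples below. At equal bandwidth, the IOPS figures are 16× smaller than in the 4 KiB run.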
00:08:16.239 774.00 IOPS, 48.38 MiB/s [2024-12-08T05:55:40.218Z] 2202.00 IOPS, 137.62 MiB/s [2024-12-08T05:55:40.218Z] 2975.33 IOPS, 185.96 MiB/s 00:08:17.173 Latency(us) 00:08:17.173 [2024-12-08T05:55:40.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.173 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x0 length 0xbd0b 00:08:17.173 Nvme0n1 : 5.89 92.16 5.76 0.00 0.00 1320344.89 28835.84 1265917.21 00:08:17.173 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:17.173 Nvme0n1 : 5.62 113.94 7.12 0.00 0.00 1077817.72 20852.36 1273543.21 00:08:17.173 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x0 length 0x4ff8 00:08:17.173 Nvme1n1p1 : 5.95 96.86 6.05 0.00 0.00 1238358.37 51475.55 1105771.05 00:08:17.173 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:17.173 Nvme1n1p1 : 5.74 115.54 7.22 0.00 0.00 1027255.82 103904.35 1090519.04 00:08:17.173 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x0 length 0x4ff7 00:08:17.173 Nvme1n1p2 : 5.95 96.81 6.05 0.00 0.00 1202076.91 51713.86 1121023.07 00:08:17.173 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:17.173 Nvme1n1p2 : 5.85 120.31 7.52 0.00 0.00 961733.01 109623.85 922746.88 00:08:17.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x0 length 0x8000 00:08:17.173 Nvme2n1 : 6.03 101.46 6.34 0.00 0.00 1116421.53 37891.72 1159153.11 00:08:17.173 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x8000 length 0x8000 00:08:17.173 Nvme2n1 : 6.01 123.57 7.72 0.00 0.00 903690.52 77689.95 957063.91 00:08:17.173 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x0 length 0x8000 00:08:17.173 Nvme2n2 : 6.03 102.26 6.39 0.00 0.00 1073429.85 37891.72 1182031.13 00:08:17.173 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x8000 length 0x8000 00:08:17.173 Nvme2n2 : 6.02 127.68 7.98 0.00 0.00 856266.01 78643.20 991380.95 00:08:17.173 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x0 length 0x8000 00:08:17.173 Nvme2n3 : 6.04 106.03 6.63 0.00 0.00 1005810.97 42181.35 1204909.15 00:08:17.173 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x8000 length 0x8000 00:08:17.173 Nvme2n3 : 6.11 87.60 5.48 0.00 0.00 1208225.48 22997.18 2303054.20 00:08:17.173 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x0 length 0x2000 00:08:17.173 Nvme3n1 : 6.12 125.58 7.85 0.00 0.00 825313.85 1057.51 1235413.18 00:08:17.173 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.173 Verification LBA range: start 0x2000 length 0x2000 00:08:17.173 Nvme3n1 : 6.14 108.59 6.79 0.00 0.00 950705.00 3753.43 2318306.21 00:08:17.173 
[2024-12-08T05:55:40.218Z] =================================================================================================================== 00:08:17.173 [2024-12-08T05:55:40.218Z] Total : 1518.40 94.90 0.00 0.00 1038805.07 1057.51 2318306.21 00:08:17.741 00:08:17.741 real 0m7.386s 00:08:17.741 user 0m13.954s 00:08:17.741 sys 0m0.239s 00:08:17.741 05:55:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.741 05:55:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:17.741 ************************************ 00:08:17.741 END TEST bdev_verify_big_io 00:08:17.741 ************************************ 00:08:17.741 05:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:17.741 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:17.741 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.741 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:17.741 ************************************ 00:08:17.741 START TEST bdev_write_zeroes 00:08:17.741 ************************************ 00:08:17.741 05:55:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:17.741 [2024-12-08 05:55:40.719731] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:17.741 [2024-12-08 05:55:40.719911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74997 ] 00:08:18.000 [2024-12-08 05:55:40.862414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.000 [2024-12-08 05:55:40.897863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.258 Running I/O for 1 seconds... 
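Editor's note: the one-second pass below uses -w write_zeroes, i.e. bdevperf submits zero-fill operations to each bdev rather than data writes. The Total row in the table that follows is the plain sum of the seven per-job IOPS columns (6955.54 + 6944.35 + 6933.33 + 6923.06 + 6912.99 + 6902.73 + 6892.73 = 48464.73), and the same conversion as above gives 48464.73 × 4096 B / 2^20 ≈ 189.32 MiB/s, matching the Total line.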
00:08:19.629 49280.00 IOPS, 192.50 MiB/s 00:08:19.629 Latency(us) 00:08:19.629 [2024-12-08T05:55:42.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.629 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.629 Nvme0n1 : 1.04 6955.54 27.17 0.00 0.00 18328.52 8698.41 34555.35 00:08:19.629 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.629 Nvme1n1p1 : 1.04 6944.35 27.13 0.00 0.00 18325.78 13345.51 34317.03 00:08:19.629 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.629 Nvme1n1p2 : 1.04 6933.33 27.08 0.00 0.00 18298.06 13762.56 33840.41 00:08:19.629 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.629 Nvme2n1 : 1.04 6923.06 27.04 0.00 0.00 18240.73 10783.65 32887.16 00:08:19.629 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.629 Nvme2n2 : 1.05 6912.99 27.00 0.00 0.00 18236.51 10366.60 32648.84 00:08:19.629 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.629 Nvme2n3 : 1.05 6902.73 26.96 0.00 0.00 18229.35 9889.98 32887.16 00:08:19.629 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.629 Nvme3n1 : 1.05 6892.73 26.92 0.00 0.00 18223.28 9413.35 35270.28 00:08:19.629 [2024-12-08T05:55:42.674Z] =================================================================================================================== 00:08:19.629 [2024-12-08T05:55:42.674Z] Total : 48464.73 189.32 0.00 0.00 18268.89 8698.41 35270.28 00:08:19.629 00:08:19.629 real 0m1.949s 00:08:19.629 user 0m1.655s 00:08:19.629 sys 0m0.179s 00:08:19.629 05:55:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.629 05:55:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:19.629 ************************************ 00:08:19.629 END TEST bdev_write_zeroes 00:08:19.629 ************************************ 00:08:19.629 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:19.629 05:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:19.629 05:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.629 05:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.629 ************************************ 00:08:19.629 START TEST bdev_json_nonenclosed 00:08:19.629 ************************************ 00:08:19.629 05:55:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:19.887 [2024-12-08 05:55:42.747577] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:19.887 [2024-12-08 05:55:42.747762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75039 ] 00:08:19.887 [2024-12-08 05:55:42.898145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.146 [2024-12-08 05:55:42.940862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.146 [2024-12-08 05:55:42.941021] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:20.146 [2024-12-08 05:55:42.941063] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:20.146 [2024-12-08 05:55:42.941082] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.146 00:08:20.146 real 0m0.417s 00:08:20.146 user 0m0.203s 00:08:20.146 sys 0m0.109s 00:08:20.146 05:55:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.146 05:55:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:20.146 ************************************ 00:08:20.146 END TEST bdev_json_nonenclosed 00:08:20.146 ************************************ 00:08:20.146 05:55:43 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.146 05:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:20.146 05:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.146 05:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.146 ************************************ 00:08:20.146 START TEST bdev_json_nonarray 00:08:20.146 ************************************ 00:08:20.146 05:55:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.405 [2024-12-08 05:55:43.214077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:20.405 [2024-12-08 05:55:43.214295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75059 ] 00:08:20.405 [2024-12-08 05:55:43.365761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.405 [2024-12-08 05:55:43.409053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.405 [2024-12-08 05:55:43.409247] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:20.405 [2024-12-08 05:55:43.409293] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:20.405 [2024-12-08 05:55:43.409315] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.663 00:08:20.663 real 0m0.408s 00:08:20.663 user 0m0.181s 00:08:20.663 sys 0m0.122s 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:20.663 ************************************ 00:08:20.663 END TEST bdev_json_nonarray 00:08:20.663 ************************************ 00:08:20.663 05:55:43 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:20.663 05:55:43 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:20.663 05:55:43 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:20.663 05:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.663 05:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.663 05:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.663 ************************************ 00:08:20.663 START TEST bdev_gpt_uuid 00:08:20.663 ************************************ 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75089 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75089 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 75089 ']' 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.663 05:55:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:20.664 [2024-12-08 05:55:43.701557] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
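The bdev_gpt_uuid test starting here drives everything over RPC: it loads the bdev config into spdk_tgt, waits for bdev examine to finish, then fetches each GPT partition bdev by its unique partition GUID and picks the reply apart with jq, as the traced commands below show. Reduced to plain shell (a sketch; rpc_cmd in the harness is essentially a wrapper around scripts/rpc.py, and the GUID is the first partition's from this run):

bdev=$(scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
# exactly one bdev should match the GUID
[[ $(jq -r 'length' <<< "$bdev") == 1 ]]
# its alias and its GPT unique_partition_guid should both be that GUID
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]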
00:08:20.664 [2024-12-08 05:55:43.701771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75089 ] 00:08:20.923 [2024-12-08 05:55:43.853306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.923 [2024-12-08 05:55:43.896556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.862 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.862 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:21.862 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:21.862 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.862 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.132 Some configs were skipped because the RPC state that can call them passed over. 00:08:22.132 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.132 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:22.132 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.132 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.132 05:55:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:22.132 { 00:08:22.132 "name": "Nvme1n1p1", 00:08:22.132 "aliases": [ 00:08:22.132 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:22.132 ], 00:08:22.132 "product_name": "GPT Disk", 00:08:22.132 "block_size": 4096, 00:08:22.132 "num_blocks": 655104, 00:08:22.132 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:22.132 "assigned_rate_limits": { 00:08:22.132 "rw_ios_per_sec": 0, 00:08:22.132 "rw_mbytes_per_sec": 0, 00:08:22.132 "r_mbytes_per_sec": 0, 00:08:22.132 "w_mbytes_per_sec": 0 00:08:22.132 }, 00:08:22.132 "claimed": false, 00:08:22.132 "zoned": false, 00:08:22.132 "supported_io_types": { 00:08:22.132 "read": true, 00:08:22.132 "write": true, 00:08:22.132 "unmap": true, 00:08:22.132 "flush": true, 00:08:22.132 "reset": true, 00:08:22.132 "nvme_admin": false, 00:08:22.132 "nvme_io": false, 00:08:22.132 "nvme_io_md": false, 00:08:22.132 "write_zeroes": true, 00:08:22.132 "zcopy": false, 00:08:22.132 "get_zone_info": false, 00:08:22.132 "zone_management": false, 00:08:22.132 "zone_append": false, 00:08:22.132 "compare": true, 00:08:22.132 "compare_and_write": false, 00:08:22.132 "abort": true, 00:08:22.132 "seek_hole": false, 00:08:22.132 "seek_data": false, 00:08:22.132 "copy": true, 00:08:22.132 "nvme_iov_md": false 00:08:22.132 }, 00:08:22.132 "driver_specific": { 
00:08:22.132 "gpt": { 00:08:22.132 "base_bdev": "Nvme1n1", 00:08:22.132 "offset_blocks": 256, 00:08:22.132 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:22.132 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:22.132 "partition_name": "SPDK_TEST_first" 00:08:22.132 } 00:08:22.132 } 00:08:22.132 } 00:08:22.132 ]' 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:22.132 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:22.430 { 00:08:22.430 "name": "Nvme1n1p2", 00:08:22.430 "aliases": [ 00:08:22.430 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:22.430 ], 00:08:22.430 "product_name": "GPT Disk", 00:08:22.430 "block_size": 4096, 00:08:22.430 "num_blocks": 655103, 00:08:22.430 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:22.430 "assigned_rate_limits": { 00:08:22.430 "rw_ios_per_sec": 0, 00:08:22.430 "rw_mbytes_per_sec": 0, 00:08:22.430 "r_mbytes_per_sec": 0, 00:08:22.430 "w_mbytes_per_sec": 0 00:08:22.430 }, 00:08:22.430 "claimed": false, 00:08:22.430 "zoned": false, 00:08:22.430 "supported_io_types": { 00:08:22.430 "read": true, 00:08:22.430 "write": true, 00:08:22.430 "unmap": true, 00:08:22.430 "flush": true, 00:08:22.430 "reset": true, 00:08:22.430 "nvme_admin": false, 00:08:22.430 "nvme_io": false, 00:08:22.430 "nvme_io_md": false, 00:08:22.430 "write_zeroes": true, 00:08:22.430 "zcopy": false, 00:08:22.430 "get_zone_info": false, 00:08:22.430 "zone_management": false, 00:08:22.430 "zone_append": false, 00:08:22.430 "compare": true, 00:08:22.430 "compare_and_write": false, 00:08:22.430 "abort": true, 00:08:22.430 "seek_hole": false, 00:08:22.430 "seek_data": false, 00:08:22.430 "copy": true, 00:08:22.430 "nvme_iov_md": false 00:08:22.430 }, 00:08:22.430 "driver_specific": { 00:08:22.430 "gpt": { 00:08:22.430 "base_bdev": "Nvme1n1", 00:08:22.430 "offset_blocks": 655360, 00:08:22.430 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:22.430 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:22.430 "partition_name": "SPDK_TEST_second" 00:08:22.430 } 00:08:22.430 } 00:08:22.430 } 00:08:22.430 ]' 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75089 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 75089 ']' 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 75089 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75089 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.430 killing process with pid 75089 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75089' 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 75089 00:08:22.430 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 75089 00:08:22.688 00:08:22.688 real 0m2.117s 00:08:22.688 user 0m2.531s 00:08:22.688 sys 0m0.379s 00:08:22.688 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.688 05:55:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.688 ************************************ 00:08:22.688 END TEST bdev_gpt_uuid 00:08:22.688 ************************************ 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:22.946 05:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:23.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:23.204 Waiting for block devices as requested 00:08:23.463 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:23.463 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:23.463 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:23.721 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.986 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:28.986 05:55:51 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:28.986 05:55:51 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:28.986 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:28.986 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:28.986 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:28.986 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:28.986 05:55:51 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:28.986 00:08:28.986 real 0m52.898s 00:08:28.986 user 1m8.198s 00:08:28.986 sys 0m9.250s 00:08:28.986 05:55:51 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.986 ************************************ 00:08:28.986 END TEST blockdev_nvme_gpt 00:08:28.986 ************************************ 00:08:28.986 05:55:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:28.986 05:55:51 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:28.986 05:55:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.986 05:55:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.986 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:08:28.986 ************************************ 00:08:28.986 START TEST nvme 00:08:28.986 ************************************ 00:08:28.986 05:55:51 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:28.986 * Looking for test storage... 00:08:29.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.245 05:55:52 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.245 05:55:52 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.245 05:55:52 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.245 05:55:52 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.245 05:55:52 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.245 05:55:52 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.245 05:55:52 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.245 05:55:52 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:29.245 05:55:52 nvme -- scripts/common.sh@345 -- # : 1 00:08:29.245 05:55:52 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.245 05:55:52 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.245 05:55:52 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:29.245 05:55:52 nvme -- scripts/common.sh@353 -- # local d=1 00:08:29.245 05:55:52 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.245 05:55:52 nvme -- scripts/common.sh@355 -- # echo 1 00:08:29.245 05:55:52 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.245 05:55:52 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@353 -- # local d=2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.245 05:55:52 nvme -- scripts/common.sh@355 -- # echo 2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.245 05:55:52 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.245 05:55:52 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.245 05:55:52 nvme -- scripts/common.sh@368 -- # return 0 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:29.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.245 --rc genhtml_branch_coverage=1 00:08:29.245 --rc genhtml_function_coverage=1 00:08:29.245 --rc genhtml_legend=1 00:08:29.245 --rc geninfo_all_blocks=1 00:08:29.245 --rc geninfo_unexecuted_blocks=1 00:08:29.245 00:08:29.245 ' 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:29.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.245 --rc genhtml_branch_coverage=1 00:08:29.245 --rc genhtml_function_coverage=1 00:08:29.245 --rc genhtml_legend=1 00:08:29.245 --rc geninfo_all_blocks=1 00:08:29.245 --rc geninfo_unexecuted_blocks=1 00:08:29.245 00:08:29.245 ' 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:29.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.245 --rc genhtml_branch_coverage=1 00:08:29.245 --rc genhtml_function_coverage=1 00:08:29.245 --rc genhtml_legend=1 00:08:29.245 --rc geninfo_all_blocks=1 00:08:29.245 --rc geninfo_unexecuted_blocks=1 00:08:29.245 00:08:29.245 ' 00:08:29.245 05:55:52 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:29.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.245 --rc genhtml_branch_coverage=1 00:08:29.245 --rc genhtml_function_coverage=1 00:08:29.245 --rc genhtml_legend=1 00:08:29.245 --rc geninfo_all_blocks=1 00:08:29.245 --rc geninfo_unexecuted_blocks=1 00:08:29.245 00:08:29.245 ' 00:08:29.245 05:55:52 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:29.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:30.378 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.378 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.378 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.378 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.378 05:55:53 nvme -- nvme/nvme.sh@79 -- # uname 00:08:30.378 05:55:53 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:30.378 05:55:53 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:30.378 05:55:53 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:30.378 05:55:53 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1071 -- # stubpid=75714 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:30.378 Waiting for stub to ready for secondary processes... 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75714 ]] 00:08:30.378 05:55:53 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:30.378 [2024-12-08 05:55:53.401869] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:30.378 [2024-12-08 05:55:53.402055] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:31.315 [2024-12-08 05:55:54.179137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.315 [2024-12-08 05:55:54.205699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.315 [2024-12-08 05:55:54.205776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.315 [2024-12-08 05:55:54.205848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.315 [2024-12-08 05:55:54.219777] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:31.315 [2024-12-08 05:55:54.219947] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.315 [2024-12-08 05:55:54.231938] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:31.315 [2024-12-08 05:55:54.232232] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:31.315 [2024-12-08 05:55:54.233015] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.315 [2024-12-08 05:55:54.233314] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:31.315 [2024-12-08 05:55:54.233513] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:31.315 [2024-12-08 05:55:54.234168] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.315 [2024-12-08 05:55:54.234468] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:31.315 [2024-12-08 05:55:54.234638] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:31.315 [2024-12-08 05:55:54.235579] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.315 [2024-12-08 05:55:54.235866] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:31.315 [2024-12-08 05:55:54.236062] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:31.315 [2024-12-08 05:55:54.236201] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:31.315 [2024-12-08 05:55:54.236379] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:31.574 05:55:54 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:31.574 done. 00:08:31.574 05:55:54 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:31.574 05:55:54 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:31.574 05:55:54 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:31.574 05:55:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.574 05:55:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.575 ************************************ 00:08:31.575 START TEST nvme_reset 00:08:31.575 ************************************ 00:08:31.575 05:55:54 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:31.834 Initializing NVMe Controllers 00:08:31.834 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:31.834 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:31.834 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:31.834 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:31.834 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:31.834 00:08:31.834 real 0m0.274s 00:08:31.834 user 0m0.093s 00:08:31.834 sys 0m0.136s 00:08:31.834 05:55:54 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.834 05:55:54 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:31.834 ************************************ 00:08:31.834 END TEST nvme_reset 00:08:31.834 ************************************ 00:08:31.834 05:55:54 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:31.834 05:55:54 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.834 05:55:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.834 05:55:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.834 ************************************ 00:08:31.834 START TEST nvme_identify 00:08:31.834 ************************************ 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:31.834 05:55:54 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:31.834 05:55:54 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:31.834 05:55:54 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:31.834 05:55:54 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:31.834 05:55:54 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:31.834 05:55:54 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:32.111 [2024-12-08 
05:55:54.974593] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75736 terminated unexpected 00:08:32.111 ===================================================== 00:08:32.111 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.111 ===================================================== 00:08:32.111 Controller Capabilities/Features 00:08:32.111 ================================ 00:08:32.111 Vendor ID: 1b36 00:08:32.111 Subsystem Vendor ID: 1af4 00:08:32.111 Serial Number: 12340 00:08:32.111 Model Number: QEMU NVMe Ctrl 00:08:32.111 Firmware Version: 8.0.0 00:08:32.111 Recommended Arb Burst: 6 00:08:32.111 IEEE OUI Identifier: 00 54 52 00:08:32.111 Multi-path I/O 00:08:32.111 May have multiple subsystem ports: No 00:08:32.111 May have multiple controllers: No 00:08:32.111 Associated with SR-IOV VF: No 00:08:32.111 Max Data Transfer Size: 524288 00:08:32.111 Max Number of Namespaces: 256 00:08:32.111 Max Number of I/O Queues: 64 00:08:32.111 NVMe Specification Version (VS): 1.4 00:08:32.111 NVMe Specification Version (Identify): 1.4 00:08:32.111 Maximum Queue Entries: 2048 00:08:32.111 Contiguous Queues Required: Yes 00:08:32.111 Arbitration Mechanisms Supported 00:08:32.111 Weighted Round Robin: Not Supported 00:08:32.111 Vendor Specific: Not Supported 00:08:32.111 Reset Timeout: 7500 ms 00:08:32.111 Doorbell Stride: 4 bytes 00:08:32.111 NVM Subsystem Reset: Not Supported 00:08:32.111 Command Sets Supported 00:08:32.111 NVM Command Set: Supported 00:08:32.111 Boot Partition: Not Supported 00:08:32.111 Memory Page Size Minimum: 4096 bytes 00:08:32.111 Memory Page Size Maximum: 65536 bytes 00:08:32.111 Persistent Memory Region: Not Supported 00:08:32.111 Optional Asynchronous Events Supported 00:08:32.111 Namespace Attribute Notices: Supported 00:08:32.111 Firmware Activation Notices: Not Supported 00:08:32.111 ANA Change Notices: Not Supported 00:08:32.111 PLE Aggregate Log Change Notices: Not Supported 00:08:32.111 LBA Status Info Alert Notices: Not Supported 00:08:32.111 EGE Aggregate Log Change Notices: Not Supported 00:08:32.111 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.111 Zone Descriptor Change Notices: Not Supported 00:08:32.111 Discovery Log Change Notices: Not Supported 00:08:32.111 Controller Attributes 00:08:32.111 128-bit Host Identifier: Not Supported 00:08:32.111 Non-Operational Permissive Mode: Not Supported 00:08:32.111 NVM Sets: Not Supported 00:08:32.111 Read Recovery Levels: Not Supported 00:08:32.111 Endurance Groups: Not Supported 00:08:32.111 Predictable Latency Mode: Not Supported 00:08:32.111 Traffic Based Keep ALive: Not Supported 00:08:32.111 Namespace Granularity: Not Supported 00:08:32.111 SQ Associations: Not Supported 00:08:32.111 UUID List: Not Supported 00:08:32.111 Multi-Domain Subsystem: Not Supported 00:08:32.111 Fixed Capacity Management: Not Supported 00:08:32.111 Variable Capacity Management: Not Supported 00:08:32.111 Delete Endurance Group: Not Supported 00:08:32.111 Delete NVM Set: Not Supported 00:08:32.111 Extended LBA Formats Supported: Supported 00:08:32.111 Flexible Data Placement Supported: Not Supported 00:08:32.111 00:08:32.111 Controller Memory Buffer Support 00:08:32.111 ================================ 00:08:32.111 Supported: No 00:08:32.112 00:08:32.112 Persistent Memory Region Support 00:08:32.112 ================================ 00:08:32.112 Supported: No 00:08:32.112 00:08:32.112 Admin Command Set Attributes 00:08:32.112 ============================ 00:08:32.112 Security Send/Receive: Not 
Supported 00:08:32.112 Format NVM: Supported 00:08:32.112 Firmware Activate/Download: Not Supported 00:08:32.112 Namespace Management: Supported 00:08:32.112 Device Self-Test: Not Supported 00:08:32.112 Directives: Supported 00:08:32.112 NVMe-MI: Not Supported 00:08:32.112 Virtualization Management: Not Supported 00:08:32.112 Doorbell Buffer Config: Supported 00:08:32.112 Get LBA Status Capability: Not Supported 00:08:32.112 Command & Feature Lockdown Capability: Not Supported 00:08:32.112 Abort Command Limit: 4 00:08:32.112 Async Event Request Limit: 4 00:08:32.112 Number of Firmware Slots: N/A 00:08:32.112 Firmware Slot 1 Read-Only: N/A 00:08:32.112 Firmware Activation Without Reset: N/A 00:08:32.112 Multiple Update Detection Support: N/A 00:08:32.112 Firmware Update Granularity: No Information Provided 00:08:32.112 Per-Namespace SMART Log: Yes 00:08:32.112 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.112 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:32.112 Command Effects Log Page: Supported 00:08:32.112 Get Log Page Extended Data: Supported 00:08:32.112 Telemetry Log Pages: Not Supported 00:08:32.112 Persistent Event Log Pages: Not Supported 00:08:32.112 Supported Log Pages Log Page: May Support 00:08:32.112 Commands Supported & Effects Log Page: Not Supported 00:08:32.112 Feature Identifiers & Effects Log Page:May Support 00:08:32.112 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.112 Data Area 4 for Telemetry Log: Not Supported 00:08:32.112 Error Log Page Entries Supported: 1 00:08:32.112 Keep Alive: Not Supported 00:08:32.112 00:08:32.112 NVM Command Set Attributes 00:08:32.112 ========================== 00:08:32.112 Submission Queue Entry Size 00:08:32.112 Max: 64 00:08:32.112 Min: 64 00:08:32.112 Completion Queue Entry Size 00:08:32.112 Max: 16 00:08:32.112 Min: 16 00:08:32.112 Number of Namespaces: 256 00:08:32.112 Compare Command: Supported 00:08:32.112 Write Uncorrectable Command: Not Supported 00:08:32.112 Dataset Management Command: Supported 00:08:32.112 Write Zeroes Command: Supported 00:08:32.112 Set Features Save Field: Supported 00:08:32.112 Reservations: Not Supported 00:08:32.112 Timestamp: Supported 00:08:32.112 Copy: Supported 00:08:32.112 Volatile Write Cache: Present 00:08:32.112 Atomic Write Unit (Normal): 1 00:08:32.112 Atomic Write Unit (PFail): 1 00:08:32.112 Atomic Compare & Write Unit: 1 00:08:32.112 Fused Compare & Write: Not Supported 00:08:32.112 Scatter-Gather List 00:08:32.112 SGL Command Set: Supported 00:08:32.112 SGL Keyed: Not Supported 00:08:32.112 SGL Bit Bucket Descriptor: Not Supported 00:08:32.112 SGL Metadata Pointer: Not Supported 00:08:32.112 Oversized SGL: Not Supported 00:08:32.112 SGL Metadata Address: Not Supported 00:08:32.112 SGL Offset: Not Supported 00:08:32.112 Transport SGL Data Block: Not Supported 00:08:32.112 Replay Protected Memory Block: Not Supported 00:08:32.112 00:08:32.112 Firmware Slot Information 00:08:32.112 ========================= 00:08:32.112 Active slot: 1 00:08:32.112 Slot 1 Firmware Revision: 1.0 00:08:32.112 00:08:32.112 00:08:32.112 Commands Supported and Effects 00:08:32.112 ============================== 00:08:32.112 Admin Commands 00:08:32.112 -------------- 00:08:32.112 Delete I/O Submission Queue (00h): Supported 00:08:32.112 Create I/O Submission Queue (01h): Supported 00:08:32.112 Get Log Page (02h): Supported 00:08:32.112 Delete I/O Completion Queue (04h): Supported 00:08:32.112 Create I/O Completion Queue (05h): Supported 00:08:32.112 Identify (06h): Supported 00:08:32.112 
Abort (08h): Supported 00:08:32.112 Set Features (09h): Supported 00:08:32.112 Get Features (0Ah): Supported 00:08:32.112 Asynchronous Event Request (0Ch): Supported 00:08:32.112 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.112 Directive Send (19h): Supported 00:08:32.112 Directive Receive (1Ah): Supported 00:08:32.112 Virtualization Management (1Ch): Supported 00:08:32.112 Doorbell Buffer Config (7Ch): Supported 00:08:32.112 Format NVM (80h): Supported LBA-Change 00:08:32.112 I/O Commands 00:08:32.112 ------------ 00:08:32.112 Flush (00h): Supported LBA-Change 00:08:32.112 Write (01h): Supported LBA-Change 00:08:32.112 Read (02h): Supported 00:08:32.112 Compare (05h): Supported 00:08:32.112 Write Zeroes (08h): Supported LBA-Change 00:08:32.112 Dataset Management (09h): Supported LBA-Change 00:08:32.112 Unknown (0Ch): Supported 00:08:32.112 Unknown (12h): Supported 00:08:32.112 Copy (19h): Supported LBA-Change 00:08:32.112 Unknown (1Dh): Supported LBA-Change 00:08:32.112 00:08:32.112 Error Log 00:08:32.112 ========= 00:08:32.112 00:08:32.112 Arbitration 00:08:32.112 =========== 00:08:32.112 Arbitration Burst: no limit 00:08:32.112 00:08:32.112 Power Management 00:08:32.112 ================ 00:08:32.112 Number of Power States: 1 00:08:32.112 Current Power State: Power State #0 00:08:32.112 Power State #0: 00:08:32.112 Max Power: 25.00 W 00:08:32.112 Non-Operational State: Operational 00:08:32.112 Entry Latency: 16 microseconds 00:08:32.112 Exit Latency: 4 microseconds 00:08:32.112 Relative Read Throughput: 0 00:08:32.112 Relative Read Latency: 0 00:08:32.112 Relative Write Throughput: 0 00:08:32.112 Relative Write Latency: 0 00:08:32.112 Idle Power[2024-12-08 05:55:54.976133] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75736 terminated unexpected 00:08:32.112 : Not Reported 00:08:32.112 Active Power: Not Reported 00:08:32.112 Non-Operational Permissive Mode: Not Supported 00:08:32.112 00:08:32.112 Health Information 00:08:32.112 ================== 00:08:32.112 Critical Warnings: 00:08:32.112 Available Spare Space: OK 00:08:32.112 Temperature: OK 00:08:32.112 Device Reliability: OK 00:08:32.112 Read Only: No 00:08:32.112 Volatile Memory Backup: OK 00:08:32.112 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.112 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.112 Available Spare: 0% 00:08:32.112 Available Spare Threshold: 0% 00:08:32.112 Life Percentage Used: 0% 00:08:32.112 Data Units Read: 658 00:08:32.112 Data Units Written: 587 00:08:32.112 Host Read Commands: 32403 00:08:32.112 Host Write Commands: 32189 00:08:32.112 Controller Busy Time: 0 minutes 00:08:32.112 Power Cycles: 0 00:08:32.112 Power On Hours: 0 hours 00:08:32.112 Unsafe Shutdowns: 0 00:08:32.112 Unrecoverable Media Errors: 0 00:08:32.112 Lifetime Error Log Entries: 0 00:08:32.112 Warning Temperature Time: 0 minutes 00:08:32.112 Critical Temperature Time: 0 minutes 00:08:32.112 00:08:32.112 Number of Queues 00:08:32.112 ================ 00:08:32.112 Number of I/O Submission Queues: 64 00:08:32.112 Number of I/O Completion Queues: 64 00:08:32.112 00:08:32.112 ZNS Specific Controller Data 00:08:32.112 ============================ 00:08:32.112 Zone Append Size Limit: 0 00:08:32.112 00:08:32.112 00:08:32.112 Active Namespaces 00:08:32.112 ================= 00:08:32.112 Namespace ID:1 00:08:32.112 Error Recovery Timeout: Unlimited 00:08:32.112 Command Set Identifier: NVM (00h) 00:08:32.112 Deallocate: Supported 00:08:32.112 Deallocated/Unwritten Error: 
Supported 00:08:32.112 Deallocated Read Value: All 0x00 00:08:32.112 Deallocate in Write Zeroes: Not Supported 00:08:32.112 Deallocated Guard Field: 0xFFFF 00:08:32.112 Flush: Supported 00:08:32.112 Reservation: Not Supported 00:08:32.112 Metadata Transferred as: Separate Metadata Buffer 00:08:32.112 Namespace Sharing Capabilities: Private 00:08:32.112 Size (in LBAs): 1548666 (5GiB) 00:08:32.112 Capacity (in LBAs): 1548666 (5GiB) 00:08:32.112 Utilization (in LBAs): 1548666 (5GiB) 00:08:32.112 Thin Provisioning: Not Supported 00:08:32.112 Per-NS Atomic Units: No 00:08:32.112 Maximum Single Source Range Length: 128 00:08:32.112 Maximum Copy Length: 128 00:08:32.112 Maximum Source Range Count: 128 00:08:32.112 NGUID/EUI64 Never Reused: No 00:08:32.112 Namespace Write Protected: No 00:08:32.112 Number of LBA Formats: 8 00:08:32.112 Current LBA Format: LBA Format #07 00:08:32.112 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.112 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.112 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.112 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.112 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.112 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.112 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.112 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.112 00:08:32.112 NVM Specific Namespace Data 00:08:32.112 =========================== 00:08:32.112 Logical Block Storage Tag Mask: 0 00:08:32.112 Protection Information Capabilities: 00:08:32.112 16b Guard Protection Information Storage Tag Support: No 00:08:32.112 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.112 Storage Tag Check Read Support: No 00:08:32.112 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.112 ===================================================== 00:08:32.112 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.112 ===================================================== 00:08:32.112 Controller Capabilities/Features 00:08:32.112 ================================ 00:08:32.112 Vendor ID: 1b36 00:08:32.112 Subsystem Vendor ID: 1af4 00:08:32.112 Serial Number: 12341 00:08:32.112 Model Number: QEMU NVMe Ctrl 00:08:32.112 Firmware Version: 8.0.0 00:08:32.112 Recommended Arb Burst: 6 00:08:32.112 IEEE OUI Identifier: 00 54 52 00:08:32.112 Multi-path I/O 00:08:32.112 May have multiple subsystem ports: No 00:08:32.112 May have multiple controllers: No 00:08:32.112 Associated with SR-IOV VF: No 00:08:32.112 Max Data Transfer Size: 524288 00:08:32.112 Max Number of Namespaces: 256 00:08:32.112 Max Number of I/O Queues: 64 00:08:32.112 NVMe Specification Version (VS): 1.4 00:08:32.112 NVMe Specification Version (Identify): 1.4 
00:08:32.112 Maximum Queue Entries: 2048 00:08:32.112 Contiguous Queues Required: Yes 00:08:32.112 Arbitration Mechanisms Supported 00:08:32.113 Weighted Round Robin: Not Supported 00:08:32.113 Vendor Specific: Not Supported 00:08:32.113 Reset Timeout: 7500 ms 00:08:32.113 Doorbell Stride: 4 bytes 00:08:32.113 NVM Subsystem Reset: Not Supported 00:08:32.113 Command Sets Supported 00:08:32.113 NVM Command Set: Supported 00:08:32.113 Boot Partition: Not Supported 00:08:32.113 Memory Page Size Minimum: 4096 bytes 00:08:32.113 Memory Page Size Maximum: 65536 bytes 00:08:32.113 Persistent Memory Region: Not Supported 00:08:32.113 Optional Asynchronous Events Supported 00:08:32.113 Namespace Attribute Notices: Supported 00:08:32.113 Firmware Activation Notices: Not Supported 00:08:32.113 ANA Change Notices: Not Supported 00:08:32.113 PLE Aggregate Log Change Notices: Not Supported 00:08:32.113 LBA Status Info Alert Notices: Not Supported 00:08:32.113 EGE Aggregate Log Change Notices: Not Supported 00:08:32.113 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.113 Zone Descriptor Change Notices: Not Supported 00:08:32.113 Discovery Log Change Notices: Not Supported 00:08:32.113 Controller Attributes 00:08:32.113 128-bit Host Identifier: Not Supported 00:08:32.113 Non-Operational Permissive Mode: Not Supported 00:08:32.113 NVM Sets: Not Supported 00:08:32.113 Read Recovery Levels: Not Supported 00:08:32.113 Endurance Groups: Not Supported 00:08:32.113 Predictable Latency Mode: Not Supported 00:08:32.113 Traffic Based Keep ALive: Not Supported 00:08:32.113 Namespace Granularity: Not Supported 00:08:32.113 SQ Associations: Not Supported 00:08:32.113 UUID List: Not Supported 00:08:32.113 Multi-Domain Subsystem: Not Supported 00:08:32.113 Fixed Capacity Management: Not Supported 00:08:32.113 Variable Capacity Management: Not Supported 00:08:32.113 Delete Endurance Group: Not Supported 00:08:32.113 Delete NVM Set: Not Supported 00:08:32.113 Extended LBA Formats Supported: Supported 00:08:32.113 Flexible Data Placement Supported: Not Supported 00:08:32.113 00:08:32.113 Controller Memory Buffer Support 00:08:32.113 ================================ 00:08:32.113 Supported: No 00:08:32.113 00:08:32.113 Persistent Memory Region Support 00:08:32.113 ================================ 00:08:32.113 Supported: No 00:08:32.113 00:08:32.113 Admin Command Set Attributes 00:08:32.113 ============================ 00:08:32.113 Security Send/Receive: Not Supported 00:08:32.113 Format NVM: Supported 00:08:32.113 Firmware Activate/Download: Not Supported 00:08:32.113 Namespace Management: Supported 00:08:32.113 Device Self-Test: Not Supported 00:08:32.113 Directives: Supported 00:08:32.113 NVMe-MI: Not Supported 00:08:32.113 Virtualization Management: Not Supported 00:08:32.113 Doorbell Buffer Config: Supported 00:08:32.113 Get LBA Status Capability: Not Supported 00:08:32.113 Command & Feature Lockdown Capability: Not Supported 00:08:32.113 Abort Command Limit: 4 00:08:32.113 Async Event Request Limit: 4 00:08:32.113 Number of Firmware Slots: N/A 00:08:32.113 Firmware Slot 1 Read-Only: N/A 00:08:32.113 Firmware Activation Without Reset: N/A 00:08:32.113 Multiple Update Detection Support: N/A 00:08:32.113 Firmware Update Granularity: No Information Provided 00:08:32.113 Per-Namespace SMART Log: Yes 00:08:32.113 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.113 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:32.113 Command Effects Log Page: Supported 00:08:32.113 Get Log Page Extended Data: 
Supported 00:08:32.113 Telemetry Log Pages: Not Supported 00:08:32.113 Persistent Event Log Pages: Not Supported 00:08:32.113 Supported Log Pages Log Page: May Support 00:08:32.113 Commands Supported & Effects Log Page: Not Supported 00:08:32.113 Feature Identifiers & Effects Log Page:May Support 00:08:32.113 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.113 Data Area 4 for Telemetry Log: Not Supported 00:08:32.113 Error Log Page Entries Supported: 1 00:08:32.113 Keep Alive: Not Supported 00:08:32.113 00:08:32.113 NVM Command Set Attributes 00:08:32.113 ========================== 00:08:32.113 Submission Queue Entry Size 00:08:32.113 Max: 64 00:08:32.113 Min: 64 00:08:32.113 Completion Queue Entry Size 00:08:32.113 Max: 16 00:08:32.113 Min: 16 00:08:32.113 Number of Namespaces: 256 00:08:32.113 Compare Command: Supported 00:08:32.113 Write Uncorrectable Command: Not Supported 00:08:32.113 Dataset Management Command: Supported 00:08:32.113 Write Zeroes Command: Supported 00:08:32.113 Set Features Save Field: Supported 00:08:32.113 Reservations: Not Supported 00:08:32.113 Timestamp: Supported 00:08:32.113 Copy: Supported 00:08:32.113 Volatile Write Cache: Present 00:08:32.113 Atomic Write Unit (Normal): 1 00:08:32.113 Atomic Write Unit (PFail): 1 00:08:32.113 Atomic Compare & Write Unit: 1 00:08:32.113 Fused Compare & Write: Not Supported 00:08:32.113 Scatter-Gather List 00:08:32.113 SGL Command Set: Supported 00:08:32.113 SGL Keyed: Not Supported 00:08:32.113 SGL Bit Bucket Descriptor: Not Supported 00:08:32.113 SGL Metadata Pointer: Not Supported 00:08:32.113 Oversized SGL: Not Supported 00:08:32.113 SGL Metadata Address: Not Supported 00:08:32.113 SGL Offset: Not Supported 00:08:32.113 Transport SGL Data Block: Not Supported 00:08:32.113 Replay Protected Memory Block: Not Supported 00:08:32.113 00:08:32.113 Firmware Slot Information 00:08:32.113 ========================= 00:08:32.113 Active slot: 1 00:08:32.113 Slot 1 Firmware Revision: 1.0 00:08:32.113 00:08:32.113 00:08:32.113 Commands Supported and Effects 00:08:32.113 ============================== 00:08:32.113 Admin Commands 00:08:32.113 -------------- 00:08:32.113 Delete I/O Submission Queue (00h): Supported 00:08:32.113 Create I/O Submission Queue (01h): Supported 00:08:32.113 Get Log Page (02h): Supported 00:08:32.113 Delete I/O Completion Queue (04h): Supported 00:08:32.113 Create I/O Completion Queue (05h): Supported 00:08:32.113 Identify (06h): Supported 00:08:32.113 Abort (08h): Supported 00:08:32.113 Set Features (09h): Supported 00:08:32.113 Get Features (0Ah): Supported 00:08:32.113 Asynchronous Event Request (0Ch): Supported 00:08:32.113 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.113 Directive Send (19h): Supported 00:08:32.113 Directive Receive (1Ah): Supported 00:08:32.113 Virtualization Management (1Ch): Supported 00:08:32.113 Doorbell Buffer Config (7Ch): Supported 00:08:32.113 Format NVM (80h): Supported LBA-Change 00:08:32.113 I/O Commands 00:08:32.113 ------------ 00:08:32.113 Flush (00h): Supported LBA-Change 00:08:32.113 Write (01h): Supported LBA-Change 00:08:32.113 Read (02h): Supported 00:08:32.113 Compare (05h): Supported 00:08:32.113 Write Zeroes (08h): Supported LBA-Change 00:08:32.113 Dataset Management (09h): Supported LBA-Change 00:08:32.113 Unknown (0Ch): Supported 00:08:32.113 Unknown (12h): Supported 00:08:32.113 Copy (19h): Supported LBA-Change 00:08:32.113 Unknown (1Dh): Supported LBA-Change 00:08:32.113 00:08:32.113 Error Log 00:08:32.113 ========= 00:08:32.113 
00:08:32.113 Arbitration 00:08:32.113 =========== 00:08:32.113 Arbitration Burst: no limit 00:08:32.113 00:08:32.113 Power Management 00:08:32.113 ================ 00:08:32.113 Number of Power States: 1 00:08:32.113 Current Power State: Power State #0 00:08:32.113 Power State #0: 00:08:32.113 Max Power: 25.00 W 00:08:32.113 Non-Operational State: Operational 00:08:32.113 Entry Latency: 16 microseconds 00:08:32.113 Exit Latency: 4 microseconds 00:08:32.113 Relative Read Throughput: 0 00:08:32.113 Relative Read Latency: 0 00:08:32.113 Relative Write Throughput: 0 00:08:32.113 Relative Write Latency: 0 00:08:32.113 Idle Power: Not Reported 00:08:32.113 Active Power: Not Reported 00:08:32.113 Non-Operational Permissive Mode: Not Supported 00:08:32.113 00:08:32.113 Health Information 00:08:32.113 ================== 00:08:32.113 Critical Warnings: 00:08:32.113 Available Spare Space: OK 00:08:32.113 Temperature: [2024-12-08 05:55:54.977358] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75736 terminated unexpected 00:08:32.113 OK 00:08:32.113 Device Reliability: OK 00:08:32.113 Read Only: No 00:08:32.113 Volatile Memory Backup: OK 00:08:32.113 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.113 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.113 Available Spare: 0% 00:08:32.113 Available Spare Threshold: 0% 00:08:32.113 Life Percentage Used: 0% 00:08:32.113 Data Units Read: 1005 00:08:32.113 Data Units Written: 866 00:08:32.113 Host Read Commands: 47945 00:08:32.113 Host Write Commands: 46614 00:08:32.113 Controller Busy Time: 0 minutes 00:08:32.113 Power Cycles: 0 00:08:32.113 Power On Hours: 0 hours 00:08:32.113 Unsafe Shutdowns: 0 00:08:32.113 Unrecoverable Media Errors: 0 00:08:32.113 Lifetime Error Log Entries: 0 00:08:32.113 Warning Temperature Time: 0 minutes 00:08:32.113 Critical Temperature Time: 0 minutes 00:08:32.113 00:08:32.113 Number of Queues 00:08:32.113 ================ 00:08:32.113 Number of I/O Submission Queues: 64 00:08:32.113 Number of I/O Completion Queues: 64 00:08:32.113 00:08:32.113 ZNS Specific Controller Data 00:08:32.113 ============================ 00:08:32.113 Zone Append Size Limit: 0 00:08:32.113 00:08:32.113 00:08:32.113 Active Namespaces 00:08:32.113 ================= 00:08:32.113 Namespace ID:1 00:08:32.113 Error Recovery Timeout: Unlimited 00:08:32.113 Command Set Identifier: NVM (00h) 00:08:32.113 Deallocate: Supported 00:08:32.113 Deallocated/Unwritten Error: Supported 00:08:32.113 Deallocated Read Value: All 0x00 00:08:32.113 Deallocate in Write Zeroes: Not Supported 00:08:32.113 Deallocated Guard Field: 0xFFFF 00:08:32.113 Flush: Supported 00:08:32.113 Reservation: Not Supported 00:08:32.113 Namespace Sharing Capabilities: Private 00:08:32.113 Size (in LBAs): 1310720 (5GiB) 00:08:32.113 Capacity (in LBAs): 1310720 (5GiB) 00:08:32.113 Utilization (in LBAs): 1310720 (5GiB) 00:08:32.113 Thin Provisioning: Not Supported 00:08:32.113 Per-NS Atomic Units: No 00:08:32.113 Maximum Single Source Range Length: 128 00:08:32.113 Maximum Copy Length: 128 00:08:32.113 Maximum Source Range Count: 128 00:08:32.113 NGUID/EUI64 Never Reused: No 00:08:32.113 Namespace Write Protected: No 00:08:32.113 Number of LBA Formats: 8 00:08:32.113 Current LBA Format: LBA Format #04 00:08:32.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.113 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.113 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.113 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.114 
LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.114 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.114 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.114 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.114 00:08:32.114 NVM Specific Namespace Data 00:08:32.114 =========================== 00:08:32.114 Logical Block Storage Tag Mask: 0 00:08:32.114 Protection Information Capabilities: 00:08:32.114 16b Guard Protection Information Storage Tag Support: No 00:08:32.114 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.114 Storage Tag Check Read Support: No 00:08:32.114 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.114 ===================================================== 00:08:32.114 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.114 ===================================================== 00:08:32.114 Controller Capabilities/Features 00:08:32.114 ================================ 00:08:32.114 Vendor ID: 1b36 00:08:32.114 Subsystem Vendor ID: 1af4 00:08:32.114 Serial Number: 12343 00:08:32.114 Model Number: QEMU NVMe Ctrl 00:08:32.114 Firmware Version: 8.0.0 00:08:32.114 Recommended Arb Burst: 6 00:08:32.114 IEEE OUI Identifier: 00 54 52 00:08:32.114 Multi-path I/O 00:08:32.114 May have multiple subsystem ports: No 00:08:32.114 May have multiple controllers: Yes 00:08:32.114 Associated with SR-IOV VF: No 00:08:32.114 Max Data Transfer Size: 524288 00:08:32.114 Max Number of Namespaces: 256 00:08:32.114 Max Number of I/O Queues: 64 00:08:32.114 NVMe Specification Version (VS): 1.4 00:08:32.114 NVMe Specification Version (Identify): 1.4 00:08:32.114 Maximum Queue Entries: 2048 00:08:32.114 Contiguous Queues Required: Yes 00:08:32.114 Arbitration Mechanisms Supported 00:08:32.114 Weighted Round Robin: Not Supported 00:08:32.114 Vendor Specific: Not Supported 00:08:32.114 Reset Timeout: 7500 ms 00:08:32.114 Doorbell Stride: 4 bytes 00:08:32.114 NVM Subsystem Reset: Not Supported 00:08:32.114 Command Sets Supported 00:08:32.114 NVM Command Set: Supported 00:08:32.114 Boot Partition: Not Supported 00:08:32.114 Memory Page Size Minimum: 4096 bytes 00:08:32.114 Memory Page Size Maximum: 65536 bytes 00:08:32.114 Persistent Memory Region: Not Supported 00:08:32.114 Optional Asynchronous Events Supported 00:08:32.114 Namespace Attribute Notices: Supported 00:08:32.114 Firmware Activation Notices: Not Supported 00:08:32.114 ANA Change Notices: Not Supported 00:08:32.114 PLE Aggregate Log Change Notices: Not Supported 00:08:32.114 LBA Status Info Alert Notices: Not Supported 00:08:32.114 EGE Aggregate Log Change Notices: Not Supported 00:08:32.114 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.114 Zone Descriptor Change Notices: Not Supported 
00:08:32.114 Discovery Log Change Notices: Not Supported 00:08:32.114 Controller Attributes 00:08:32.114 128-bit Host Identifier: Not Supported 00:08:32.114 Non-Operational Permissive Mode: Not Supported 00:08:32.114 NVM Sets: Not Supported 00:08:32.114 Read Recovery Levels: Not Supported 00:08:32.114 Endurance Groups: Supported 00:08:32.114 Predictable Latency Mode: Not Supported 00:08:32.114 Traffic Based Keep ALive: Not Supported 00:08:32.114 Namespace Granularity: Not Supported 00:08:32.114 SQ Associations: Not Supported 00:08:32.114 UUID List: Not Supported 00:08:32.114 Multi-Domain Subsystem: Not Supported 00:08:32.114 Fixed Capacity Management: Not Supported 00:08:32.114 Variable Capacity Management: Not Supported 00:08:32.114 Delete Endurance Group: Not Supported 00:08:32.114 Delete NVM Set: Not Supported 00:08:32.114 Extended LBA Formats Supported: Supported 00:08:32.114 Flexible Data Placement Supported: Supported 00:08:32.114 00:08:32.114 Controller Memory Buffer Support 00:08:32.114 ================================ 00:08:32.114 Supported: No 00:08:32.114 00:08:32.114 Persistent Memory Region Support 00:08:32.114 ================================ 00:08:32.114 Supported: No 00:08:32.114 00:08:32.114 Admin Command Set Attributes 00:08:32.114 ============================ 00:08:32.114 Security Send/Receive: Not Supported 00:08:32.114 Format NVM: Supported 00:08:32.114 Firmware Activate/Download: Not Supported 00:08:32.114 Namespace Management: Supported 00:08:32.114 Device Self-Test: Not Supported 00:08:32.114 Directives: Supported 00:08:32.114 NVMe-MI: Not Supported 00:08:32.114 Virtualization Management: Not Supported 00:08:32.114 Doorbell Buffer Config: Supported 00:08:32.114 Get LBA Status Capability: Not Supported 00:08:32.114 Command & Feature Lockdown Capability: Not Supported 00:08:32.114 Abort Command Limit: 4 00:08:32.114 Async Event Request Limit: 4 00:08:32.114 Number of Firmware Slots: N/A 00:08:32.114 Firmware Slot 1 Read-Only: N/A 00:08:32.114 Firmware Activation Without Reset: N/A 00:08:32.114 Multiple Update Detection Support: N/A 00:08:32.114 Firmware Update Granularity: No Information Provided 00:08:32.114 Per-Namespace SMART Log: Yes 00:08:32.114 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.114 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:32.114 Command Effects Log Page: Supported 00:08:32.114 Get Log Page Extended Data: Supported 00:08:32.114 Telemetry Log Pages: Not Supported 00:08:32.114 Persistent Event Log Pages: Not Supported 00:08:32.114 Supported Log Pages Log Page: May Support 00:08:32.114 Commands Supported & Effects Log Page: Not Supported 00:08:32.114 Feature Identifiers & Effects Log Page:May Support 00:08:32.114 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.114 Data Area 4 for Telemetry Log: Not Supported 00:08:32.114 Error Log Page Entries Supported: 1 00:08:32.114 Keep Alive: Not Supported 00:08:32.114 00:08:32.114 NVM Command Set Attributes 00:08:32.114 ========================== 00:08:32.114 Submission Queue Entry Size 00:08:32.114 Max: 64 00:08:32.114 Min: 64 00:08:32.114 Completion Queue Entry Size 00:08:32.114 Max: 16 00:08:32.114 Min: 16 00:08:32.114 Number of Namespaces: 256 00:08:32.114 Compare Command: Supported 00:08:32.114 Write Uncorrectable Command: Not Supported 00:08:32.114 Dataset Management Command: Supported 00:08:32.114 Write Zeroes Command: Supported 00:08:32.114 Set Features Save Field: Supported 00:08:32.114 Reservations: Not Supported 00:08:32.114 Timestamp: Supported 00:08:32.114 Copy: 
Supported 00:08:32.114 Volatile Write Cache: Present 00:08:32.114 Atomic Write Unit (Normal): 1 00:08:32.114 Atomic Write Unit (PFail): 1 00:08:32.114 Atomic Compare & Write Unit: 1 00:08:32.114 Fused Compare & Write: Not Supported 00:08:32.114 Scatter-Gather List 00:08:32.114 SGL Command Set: Supported 00:08:32.114 SGL Keyed: Not Supported 00:08:32.114 SGL Bit Bucket Descriptor: Not Supported 00:08:32.114 SGL Metadata Pointer: Not Supported 00:08:32.114 Oversized SGL: Not Supported 00:08:32.114 SGL Metadata Address: Not Supported 00:08:32.114 SGL Offset: Not Supported 00:08:32.114 Transport SGL Data Block: Not Supported 00:08:32.114 Replay Protected Memory Block: Not Supported 00:08:32.114 00:08:32.114 Firmware Slot Information 00:08:32.114 ========================= 00:08:32.114 Active slot: 1 00:08:32.114 Slot 1 Firmware Revision: 1.0 00:08:32.114 00:08:32.114 00:08:32.114 Commands Supported and Effects 00:08:32.114 ============================== 00:08:32.114 Admin Commands 00:08:32.114 -------------- 00:08:32.114 Delete I/O Submission Queue (00h): Supported 00:08:32.114 Create I/O Submission Queue (01h): Supported 00:08:32.114 Get Log Page (02h): Supported 00:08:32.114 Delete I/O Completion Queue (04h): Supported 00:08:32.114 Create I/O Completion Queue (05h): Supported 00:08:32.114 Identify (06h): Supported 00:08:32.114 Abort (08h): Supported 00:08:32.114 Set Features (09h): Supported 00:08:32.114 Get Features (0Ah): Supported 00:08:32.114 Asynchronous Event Request (0Ch): Supported 00:08:32.114 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.114 Directive Send (19h): Supported 00:08:32.114 Directive Receive (1Ah): Supported 00:08:32.114 Virtualization Management (1Ch): Supported 00:08:32.114 Doorbell Buffer Config (7Ch): Supported 00:08:32.114 Format NVM (80h): Supported LBA-Change 00:08:32.114 I/O Commands 00:08:32.114 ------------ 00:08:32.114 Flush (00h): Supported LBA-Change 00:08:32.114 Write (01h): Supported LBA-Change 00:08:32.114 Read (02h): Supported 00:08:32.114 Compare (05h): Supported 00:08:32.114 Write Zeroes (08h): Supported LBA-Change 00:08:32.114 Dataset Management (09h): Supported LBA-Change 00:08:32.114 Unknown (0Ch): Supported 00:08:32.114 Unknown (12h): Supported 00:08:32.114 Copy (19h): Supported LBA-Change 00:08:32.114 Unknown (1Dh): Supported LBA-Change 00:08:32.114 00:08:32.114 Error Log 00:08:32.114 ========= 00:08:32.114 00:08:32.114 Arbitration 00:08:32.114 =========== 00:08:32.114 Arbitration Burst: no limit 00:08:32.114 00:08:32.114 Power Management 00:08:32.114 ================ 00:08:32.114 Number of Power States: 1 00:08:32.114 Current Power State: Power State #0 00:08:32.114 Power State #0: 00:08:32.114 Max Power: 25.00 W 00:08:32.114 Non-Operational State: Operational 00:08:32.114 Entry Latency: 16 microseconds 00:08:32.114 Exit Latency: 4 microseconds 00:08:32.114 Relative Read Throughput: 0 00:08:32.114 Relative Read Latency: 0 00:08:32.114 Relative Write Throughput: 0 00:08:32.114 Relative Write Latency: 0 00:08:32.114 Idle Power: Not Reported 00:08:32.114 Active Power: Not Reported 00:08:32.114 Non-Operational Permissive Mode: Not Supported 00:08:32.114 00:08:32.114 Health Information 00:08:32.114 ================== 00:08:32.114 Critical Warnings: 00:08:32.114 Available Spare Space: OK 00:08:32.114 Temperature: OK 00:08:32.114 Device Reliability: OK 00:08:32.114 Read Only: No 00:08:32.114 Volatile Memory Backup: OK 00:08:32.114 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.114 Temperature Threshold: 343 Kelvin (70 
Celsius) 00:08:32.114 Available Spare: 0% 00:08:32.114 Available Spare Threshold: 0% 00:08:32.114 Life Percentage Used: 0% 00:08:32.114 Data Units Read: 721 00:08:32.114 Data Units Written: 650 00:08:32.114 Host Read Commands: 33249 00:08:32.114 Host Write Commands: 32672 00:08:32.114 Controller Busy Time: 0 minutes 00:08:32.114 Power Cycles: 0 00:08:32.114 Power On Hours: 0 hours 00:08:32.114 Unsafe Shutdowns: 0 00:08:32.114 Unrecoverable Media Errors: 0 00:08:32.115 Lifetime Error Log Entries: 0 00:08:32.115 Warning Temperature Time: 0 minutes 00:08:32.115 Critical Temperature Time: 0 minutes 00:08:32.115 00:08:32.115 Number of Queues 00:08:32.115 ================ 00:08:32.115 Number of I/O Submission Queues: 64 00:08:32.115 Number of I/O Completion Queues: 64 00:08:32.115 00:08:32.115 ZNS Specific Controller Data 00:08:32.115 ============================ 00:08:32.115 Zone Append Size Limit: 0 00:08:32.115 00:08:32.115 00:08:32.115 Active Namespaces 00:08:32.115 ================= 00:08:32.115 Namespace ID:1 00:08:32.115 Error Recovery Timeout: Unlimited 00:08:32.115 Command Set Identifier: NVM (00h) 00:08:32.115 Deallocate: Supported 00:08:32.115 Deallocated/Unwritten Error: Supported 00:08:32.115 Deallocated Read Value: All 0x00 00:08:32.115 Deallocate in Write Zeroes: Not Supported 00:08:32.115 Deallocated Guard Field: 0xFFFF 00:08:32.115 Flush: Supported 00:08:32.115 Reservation: Not Supported 00:08:32.115 Namespace Sharing Capabilities: Multiple Controllers 00:08:32.115 Size (in LBAs): 262144 (1GiB) 00:08:32.115 Capacity (in LBAs): 262144 (1GiB) 00:08:32.115 Utilization (in LBAs): 262144 (1GiB) 00:08:32.115 Thin Provisioning: Not Supported 00:08:32.115 Per-NS Atomic Units: No 00:08:32.115 Maximum Single Source Range Length: 128 00:08:32.115 Maximum Copy Length: 128 00:08:32.115 Maximum Source Range Count: 128 00:08:32.115 NGUID/EUI64 Never Reused: No 00:08:32.115 Namespace Write Protected: No 00:08:32.115 Endurance group ID: 1 00:08:32.115 Number of LBA Formats: 8 00:08:32.115 Current LBA Format: LBA Format #04 00:08:32.115 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.115 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.115 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.115 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.115 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.115 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.115 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.115 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.115 00:08:32.115 Get Feature FDP: 00:08:32.115 ================ 00:08:32.115 Enabled: Yes 00:08:32.115 FDP configuration index: 0 00:08:32.115 00:08:32.115 FDP configurations log page 00:08:32.115 =========================== 00:08:32.115 Number of FDP configurations: 1 00:08:32.115 Version: 0 00:08:32.115 Size: 112 00:08:32.115 FDP Configuration Descriptor: 0 00:08:32.115 Descriptor Size: 96 00:08:32.115 Reclaim Group Identifier format: 2 00:08:32.115 FDP Volatile Write Cache: Not Present 00:08:32.115 FDP Configuration: Valid 00:08:32.115 Vendor Specific Size: 0 00:08:32.115 Number of Reclaim Groups: 2 00:08:32.115 Number of Reclaim Unit Handles: 8 00:08:32.115 Max Placement Identifiers: 128 00:08:32.115 Number of Namespaces Supported: 256 00:08:32.115 Reclaim Unit Nominal Size: 6000000 bytes 00:08:32.115 Estimated Reclaim Unit Time Limit: Not Reported 00:08:32.115 RUH Desc #000: RUH Type: Initially Isolated 00:08:32.115 RUH Desc #001: RUH Type: Initially Isolated 00:08:32.115 RUH
Desc #002: RUH Type: Initially Isolated 00:08:32.115 RUH Desc #003: RUH Type: Initially Isolated 00:08:32.115 RUH Desc #004: RUH Type: Initially Isolated 00:08:32.115 RUH Desc #005: RUH Type: Initially Isolated 00:08:32.115 RUH Desc #006: RUH Type: Initially Isolated 00:08:32.115 RUH Desc #007: RUH Type: Initially Isolated 00:08:32.115 00:08:32.115 FDP reclaim unit handle usage log page 00:08:32.115 ====================================== 00:08:32.115 Number of Reclaim Unit Handles: 8 00:08:32.115 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:32.115 RUH Usage Desc #001: RUH Attributes: Unused 00:08:32.115 RUH Usage Desc #002: RUH Attributes: Unused 00:08:32.115 RUH Usage Desc #003: RUH Attributes: Unused 00:08:32.115 RUH Usage Desc #004: RUH Attributes: Unused 00:08:32.115 RUH Usage Desc #005: RUH Attributes: Unused 00:08:32.115 RUH Usage Desc #006: RUH Attributes: Unused 00:08:32.115 RUH Usage Desc #007: RUH Attributes: Unused 00:08:32.115 00:08:32.115 FDP statistics log page 00:08:32.115 ======================= 00:08:32.115 Host bytes with metadata written: 413769728 [2024-12-08 05:55:54.979294] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75736 terminated unexpected 00:08:32.115 Media bytes with metadata written: 413814784 00:08:32.115 Media bytes erased: 0 00:08:32.115 00:08:32.115 FDP events log page 00:08:32.115 =================== 00:08:32.115 Number of FDP events: 0 00:08:32.115 00:08:32.115 NVM Specific Namespace Data 00:08:32.115 =========================== 00:08:32.115 Logical Block Storage Tag Mask: 0 00:08:32.115 Protection Information Capabilities: 00:08:32.115 16b Guard Protection Information Storage Tag Support: No 00:08:32.115 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.115 Storage Tag Check Read Support: No 00:08:32.115 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.115 ===================================================== 00:08:32.115 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.115 ===================================================== 00:08:32.115 Controller Capabilities/Features 00:08:32.115 ================================ 00:08:32.115 Vendor ID: 1b36 00:08:32.115 Subsystem Vendor ID: 1af4 00:08:32.115 Serial Number: 12342 00:08:32.115 Model Number: QEMU NVMe Ctrl 00:08:32.115 Firmware Version: 8.0.0 00:08:32.115 Recommended Arb Burst: 6 00:08:32.115 IEEE OUI Identifier: 00 54 52 00:08:32.115 Multi-path I/O 00:08:32.115 May have multiple subsystem ports: No 00:08:32.115 May have multiple controllers: No 00:08:32.115 Associated with SR-IOV VF: No 00:08:32.115 Max Data Transfer Size: 524288 00:08:32.115 Max Number of Namespaces: 256 00:08:32.115 Max Number of I/O Queues: 64 00:08:32.115
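The FDP statistics log page above gives enough to estimate write amplification for this endurance group: media bytes written divided by host bytes written. A quick sketch with the two counters copied from the output (bc is assumed to be installed):

    # Write amplification ~= media writes / host writes, both including metadata.
    host_bytes=413769728     # Host bytes with metadata written
    media_bytes=413814784    # Media bytes with metadata written
    echo "scale=6; $media_bytes / $host_bytes" | bc    # prints ~1.000108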
NVMe Specification Version (VS): 1.4 00:08:32.115 NVMe Specification Version (Identify): 1.4 00:08:32.115 Maximum Queue Entries: 2048 00:08:32.115 Contiguous Queues Required: Yes 00:08:32.115 Arbitration Mechanisms Supported 00:08:32.115 Weighted Round Robin: Not Supported 00:08:32.115 Vendor Specific: Not Supported 00:08:32.115 Reset Timeout: 7500 ms 00:08:32.115 Doorbell Stride: 4 bytes 00:08:32.115 NVM Subsystem Reset: Not Supported 00:08:32.115 Command Sets Supported 00:08:32.115 NVM Command Set: Supported 00:08:32.115 Boot Partition: Not Supported 00:08:32.115 Memory Page Size Minimum: 4096 bytes 00:08:32.115 Memory Page Size Maximum: 65536 bytes 00:08:32.115 Persistent Memory Region: Not Supported 00:08:32.115 Optional Asynchronous Events Supported 00:08:32.115 Namespace Attribute Notices: Supported 00:08:32.115 Firmware Activation Notices: Not Supported 00:08:32.115 ANA Change Notices: Not Supported 00:08:32.115 PLE Aggregate Log Change Notices: Not Supported 00:08:32.115 LBA Status Info Alert Notices: Not Supported 00:08:32.115 EGE Aggregate Log Change Notices: Not Supported 00:08:32.115 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.115 Zone Descriptor Change Notices: Not Supported 00:08:32.115 Discovery Log Change Notices: Not Supported 00:08:32.115 Controller Attributes 00:08:32.115 128-bit Host Identifier: Not Supported 00:08:32.115 Non-Operational Permissive Mode: Not Supported 00:08:32.115 NVM Sets: Not Supported 00:08:32.115 Read Recovery Levels: Not Supported 00:08:32.115 Endurance Groups: Not Supported 00:08:32.115 Predictable Latency Mode: Not Supported 00:08:32.115 Traffic Based Keep Alive: Not Supported 00:08:32.115 Namespace Granularity: Not Supported 00:08:32.115 SQ Associations: Not Supported 00:08:32.115 UUID List: Not Supported 00:08:32.115 Multi-Domain Subsystem: Not Supported 00:08:32.115 Fixed Capacity Management: Not Supported 00:08:32.115 Variable Capacity Management: Not Supported 00:08:32.115 Delete Endurance Group: Not Supported 00:08:32.115 Delete NVM Set: Not Supported 00:08:32.115 Extended LBA Formats Supported: Supported 00:08:32.115 Flexible Data Placement Supported: Not Supported 00:08:32.115 00:08:32.115 Controller Memory Buffer Support 00:08:32.115 ================================ 00:08:32.115 Supported: No 00:08:32.115 00:08:32.115 Persistent Memory Region Support 00:08:32.115 ================================ 00:08:32.115 Supported: No 00:08:32.115 00:08:32.115 Admin Command Set Attributes 00:08:32.115 ============================ 00:08:32.115 Security Send/Receive: Not Supported 00:08:32.115 Format NVM: Supported 00:08:32.115 Firmware Activate/Download: Not Supported 00:08:32.115 Namespace Management: Supported 00:08:32.115 Device Self-Test: Not Supported 00:08:32.115 Directives: Supported 00:08:32.115 NVMe-MI: Not Supported 00:08:32.115 Virtualization Management: Not Supported 00:08:32.115 Doorbell Buffer Config: Supported 00:08:32.115 Get LBA Status Capability: Not Supported 00:08:32.115 Command & Feature Lockdown Capability: Not Supported 00:08:32.115 Abort Command Limit: 4 00:08:32.115 Async Event Request Limit: 4 00:08:32.115 Number of Firmware Slots: N/A 00:08:32.115 Firmware Slot 1 Read-Only: N/A 00:08:32.115 Firmware Activation Without Reset: N/A 00:08:32.115 Multiple Update Detection Support: N/A 00:08:32.115 Firmware Update Granularity: No Information Provided 00:08:32.115 Per-Namespace SMART Log: Yes 00:08:32.115 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.115 Subsystem NQN: nqn.2019-08.org.qemu:12342
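"Doorbell Stride: 4 bytes" above pins down where each queue's doorbell register lives: the NVMe specification places submission queue y's tail doorbell at BAR0 offset 0x1000 + (2y) * stride and completion queue y's head doorbell at 0x1000 + (2y + 1) * stride, where stride = 4 << CAP.DSTRD (4 bytes here, i.e. DSTRD = 0). A small sketch of that layout:

    # NVMe doorbell layout for the 4-byte stride reported above (DSTRD = 0).
    stride=4
    for qid in 0 1 2; do
      sq=$(( 0x1000 + (2 * qid)     * stride ))   # SQ tail doorbell
      cq=$(( 0x1000 + (2 * qid + 1) * stride ))   # CQ head doorbell
      printf 'QID %d: SQ tail 0x%04x, CQ head 0x%04x\n' "$qid" "$sq" "$cq"
    done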
00:08:32.115 Command Effects Log Page: Supported 00:08:32.115 Get Log Page Extended Data: Supported 00:08:32.115 Telemetry Log Pages: Not Supported 00:08:32.115 Persistent Event Log Pages: Not Supported 00:08:32.115 Supported Log Pages Log Page: May Support 00:08:32.115 Commands Supported & Effects Log Page: Not Supported 00:08:32.115 Feature Identifiers & Effects Log Page: May Support 00:08:32.115 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.115 Data Area 4 for Telemetry Log: Not Supported 00:08:32.115 Error Log Page Entries Supported: 1 00:08:32.115 Keep Alive: Not Supported 00:08:32.115 00:08:32.115 NVM Command Set Attributes 00:08:32.115 ========================== 00:08:32.115 Submission Queue Entry Size 00:08:32.115 Max: 64 00:08:32.115 Min: 64 00:08:32.115 Completion Queue Entry Size 00:08:32.115 Max: 16 00:08:32.115 Min: 16 00:08:32.115 Number of Namespaces: 256 00:08:32.115 Compare Command: Supported 00:08:32.115 Write Uncorrectable Command: Not Supported 00:08:32.115 Dataset Management Command: Supported 00:08:32.115 Write Zeroes Command: Supported 00:08:32.115 Set Features Save Field: Supported 00:08:32.115 Reservations: Not Supported 00:08:32.115 Timestamp: Supported 00:08:32.115 Copy: Supported 00:08:32.115 Volatile Write Cache: Present 00:08:32.115 Atomic Write Unit (Normal): 1 00:08:32.115 Atomic Write Unit (PFail): 1 00:08:32.115 Atomic Compare & Write Unit: 1 00:08:32.115 Fused Compare & Write: Not Supported 00:08:32.115 Scatter-Gather List 00:08:32.115 SGL Command Set: Supported 00:08:32.115 SGL Keyed: Not Supported 00:08:32.115 SGL Bit Bucket Descriptor: Not Supported 00:08:32.115 SGL Metadata Pointer: Not Supported 00:08:32.115 Oversized SGL: Not Supported 00:08:32.115 SGL Metadata Address: Not Supported 00:08:32.115 SGL Offset: Not Supported 00:08:32.115 Transport SGL Data Block: Not Supported 00:08:32.115 Replay Protected Memory Block: Not Supported 00:08:32.115 00:08:32.115 Firmware Slot Information 00:08:32.115 ========================= 00:08:32.115 Active slot: 1 00:08:32.115 Slot 1 Firmware Revision: 1.0 00:08:32.115 00:08:32.115 00:08:32.115 Commands Supported and Effects 00:08:32.115 ============================== 00:08:32.115 Admin Commands 00:08:32.115 -------------- 00:08:32.115 Delete I/O Submission Queue (00h): Supported 00:08:32.115 Create I/O Submission Queue (01h): Supported 00:08:32.115 Get Log Page (02h): Supported 00:08:32.115 Delete I/O Completion Queue (04h): Supported 00:08:32.115 Create I/O Completion Queue (05h): Supported 00:08:32.115 Identify (06h): Supported 00:08:32.115 Abort (08h): Supported 00:08:32.115 Set Features (09h): Supported 00:08:32.115 Get Features (0Ah): Supported 00:08:32.115 Asynchronous Event Request (0Ch): Supported 00:08:32.115 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.115 Directive Send (19h): Supported 00:08:32.115 Directive Receive (1Ah): Supported 00:08:32.115 Virtualization Management (1Ch): Supported 00:08:32.115 Doorbell Buffer Config (7Ch): Supported 00:08:32.115 Format NVM (80h): Supported LBA-Change 00:08:32.115 I/O Commands 00:08:32.115 ------------ 00:08:32.115 Flush (00h): Supported LBA-Change 00:08:32.115 Write (01h): Supported LBA-Change 00:08:32.115 Read (02h): Supported 00:08:32.115 Compare (05h): Supported 00:08:32.115 Write Zeroes (08h): Supported LBA-Change 00:08:32.115 Dataset Management (09h): Supported LBA-Change 00:08:32.115 Unknown (0Ch): Supported 00:08:32.115 Unknown (12h): Supported 00:08:32.115 Copy (19h): Supported LBA-Change 00:08:32.115 Unknown (1Dh):
Supported LBA-Change 00:08:32.115 00:08:32.115 Error Log 00:08:32.115 ========= 00:08:32.115 00:08:32.115 Arbitration 00:08:32.115 =========== 00:08:32.115 Arbitration Burst: no limit 00:08:32.115 00:08:32.115 Power Management 00:08:32.115 ================ 00:08:32.115 Number of Power States: 1 00:08:32.115 Current Power State: Power State #0 00:08:32.115 Power State #0: 00:08:32.115 Max Power: 25.00 W 00:08:32.115 Non-Operational State: Operational 00:08:32.115 Entry Latency: 16 microseconds 00:08:32.115 Exit Latency: 4 microseconds 00:08:32.115 Relative Read Throughput: 0 00:08:32.115 Relative Read Latency: 0 00:08:32.115 Relative Write Throughput: 0 00:08:32.115 Relative Write Latency: 0 00:08:32.115 Idle Power: Not Reported 00:08:32.115 Active Power: Not Reported 00:08:32.115 Non-Operational Permissive Mode: Not Supported 00:08:32.115 00:08:32.115 Health Information 00:08:32.115 ================== 00:08:32.116 Critical Warnings: 00:08:32.116 Available Spare Space: OK 00:08:32.116 Temperature: OK 00:08:32.116 Device Reliability: OK 00:08:32.116 Read Only: No 00:08:32.116 Volatile Memory Backup: OK 00:08:32.116 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.116 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.116 Available Spare: 0% 00:08:32.116 Available Spare Threshold: 0% 00:08:32.116 Life Percentage Used: 0% 00:08:32.116 Data Units Read: 2069 00:08:32.116 Data Units Written: 1856 00:08:32.116 Host Read Commands: 98723 00:08:32.116 Host Write Commands: 96992 00:08:32.116 Controller Busy Time: 0 minutes 00:08:32.116 Power Cycles: 0 00:08:32.116 Power On Hours: 0 hours 00:08:32.116 Unsafe Shutdowns: 0 00:08:32.116 Unrecoverable Media Errors: 0 00:08:32.116 Lifetime Error Log Entries: 0 00:08:32.116 Warning Temperature Time: 0 minutes 00:08:32.116 Critical Temperature Time: 0 minutes 00:08:32.116 00:08:32.116 Number of Queues 00:08:32.116 ================ 00:08:32.116 Number of I/O Submission Queues: 64 00:08:32.116 Number of I/O Completion Queues: 64 00:08:32.116 00:08:32.116 ZNS Specific Controller Data 00:08:32.116 ============================ 00:08:32.116 Zone Append Size Limit: 0 00:08:32.116 00:08:32.116 00:08:32.116 Active Namespaces 00:08:32.116 ================= 00:08:32.116 Namespace ID:1 00:08:32.116 Error Recovery Timeout: Unlimited 00:08:32.116 Command Set Identifier: NVM (00h) 00:08:32.116 Deallocate: Supported 00:08:32.116 Deallocated/Unwritten Error: Supported 00:08:32.116 Deallocated Read Value: All 0x00 00:08:32.116 Deallocate in Write Zeroes: Not Supported 00:08:32.116 Deallocated Guard Field: 0xFFFF 00:08:32.116 Flush: Supported 00:08:32.116 Reservation: Not Supported 00:08:32.116 Namespace Sharing Capabilities: Private 00:08:32.116 Size (in LBAs): 1048576 (4GiB) 00:08:32.116 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.116 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.116 Thin Provisioning: Not Supported 00:08:32.116 Per-NS Atomic Units: No 00:08:32.116 Maximum Single Source Range Length: 128 00:08:32.116 Maximum Copy Length: 128 00:08:32.116 Maximum Source Range Count: 128 00:08:32.116 NGUID/EUI64 Never Reused: No 00:08:32.116 Namespace Write Protected: No 00:08:32.116 Number of LBA Formats: 8 00:08:32.116 Current LBA Format: LBA Format #04 00:08:32.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.116 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:32.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.116 00:08:32.116 NVM Specific Namespace Data 00:08:32.116 =========================== 00:08:32.116 Logical Block Storage Tag Mask: 0 00:08:32.116 Protection Information Capabilities: 00:08:32.116 16b Guard Protection Information Storage Tag Support: No 00:08:32.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.116 Storage Tag Check Read Support: No 00:08:32.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Namespace ID:2 00:08:32.116 Error Recovery Timeout: Unlimited 00:08:32.116 Command Set Identifier: NVM (00h) 00:08:32.116 Deallocate: Supported 00:08:32.116 Deallocated/Unwritten Error: Supported 00:08:32.116 Deallocated Read Value: All 0x00 00:08:32.116 Deallocate in Write Zeroes: Not Supported 00:08:32.116 Deallocated Guard Field: 0xFFFF 00:08:32.116 Flush: Supported 00:08:32.116 Reservation: Not Supported 00:08:32.116 Namespace Sharing Capabilities: Private 00:08:32.116 Size (in LBAs): 1048576 (4GiB) 00:08:32.116 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.116 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.116 Thin Provisioning: Not Supported 00:08:32.116 Per-NS Atomic Units: No 00:08:32.116 Maximum Single Source Range Length: 128 00:08:32.116 Maximum Copy Length: 128 00:08:32.116 Maximum Source Range Count: 128 00:08:32.116 NGUID/EUI64 Never Reused: No 00:08:32.116 Namespace Write Protected: No 00:08:32.116 Number of LBA Formats: 8 00:08:32.116 Current LBA Format: LBA Format #04 00:08:32.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.116 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.116 00:08:32.116 NVM Specific Namespace Data 00:08:32.116 =========================== 00:08:32.116 Logical Block Storage Tag Mask: 0 00:08:32.116 Protection Information Capabilities: 00:08:32.116 16b Guard Protection Information Storage Tag Support: No 00:08:32.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.116 Storage Tag Check Read Support: No 00:08:32.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Namespace ID:3 00:08:32.116 Error Recovery Timeout: Unlimited 00:08:32.116 Command Set Identifier: NVM (00h) 00:08:32.116 Deallocate: Supported 00:08:32.116 Deallocated/Unwritten Error: Supported 00:08:32.116 Deallocated Read Value: All 0x00 00:08:32.116 Deallocate in Write Zeroes: Not Supported 00:08:32.116 Deallocated Guard Field: 0xFFFF 00:08:32.116 Flush: Supported 00:08:32.116 Reservation: Not Supported 00:08:32.116 Namespace Sharing Capabilities: Private 00:08:32.116 Size (in LBAs): 1048576 (4GiB) 00:08:32.116 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.116 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.116 Thin Provisioning: Not Supported 00:08:32.116 Per-NS Atomic Units: No 00:08:32.116 Maximum Single Source Range Length: 128 00:08:32.116 Maximum Copy Length: 128 00:08:32.116 Maximum Source Range Count: 128 00:08:32.116 NGUID/EUI64 Never Reused: No 00:08:32.116 Namespace Write Protected: No 00:08:32.116 Number of LBA Formats: 8 00:08:32.116 Current LBA Format: LBA Format #04 00:08:32.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.116 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.116 00:08:32.116 NVM Specific Namespace Data 00:08:32.116 =========================== 00:08:32.116 Logical Block Storage Tag Mask: 0 00:08:32.116 Protection Information Capabilities: 00:08:32.116 16b Guard Protection Information Storage Tag Support: No 00:08:32.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.116 Storage Tag Check Read Support: No 00:08:32.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.116 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:32.116 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:32.374 ===================================================== 00:08:32.374 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.374 ===================================================== 00:08:32.374 Controller Capabilities/Features 00:08:32.374 ================================ 00:08:32.374 Vendor ID: 1b36 00:08:32.374 Subsystem Vendor ID: 1af4 00:08:32.374 Serial Number: 12340 00:08:32.374 Model Number: QEMU NVMe Ctrl 00:08:32.374 Firmware Version: 8.0.0 00:08:32.374 Recommended Arb Burst: 6 00:08:32.374 IEEE OUI Identifier: 00 54 52 00:08:32.374 Multi-path I/O 00:08:32.374 May have multiple subsystem ports: No 00:08:32.374 May have multiple controllers: No 00:08:32.374 Associated with SR-IOV VF: No 00:08:32.374 Max Data Transfer Size: 524288 00:08:32.374 Max Number of Namespaces: 256 00:08:32.374 Max Number of I/O Queues: 64 00:08:32.374 NVMe Specification Version (VS): 1.4 00:08:32.374 NVMe Specification Version (Identify): 1.4 00:08:32.374 Maximum Queue Entries: 2048 00:08:32.374 Contiguous Queues Required: Yes 00:08:32.374 Arbitration Mechanisms Supported 00:08:32.374 Weighted Round Robin: Not Supported 00:08:32.374 Vendor Specific: Not Supported 00:08:32.374 Reset Timeout: 7500 ms 00:08:32.374 Doorbell Stride: 4 bytes 00:08:32.374 NVM Subsystem Reset: Not Supported 00:08:32.374 Command Sets Supported 00:08:32.374 NVM Command Set: Supported 00:08:32.374 Boot Partition: Not Supported 00:08:32.374 Memory Page Size Minimum: 4096 bytes 00:08:32.374 Memory Page Size Maximum: 65536 bytes 00:08:32.374 Persistent Memory Region: Not Supported 00:08:32.374 Optional Asynchronous Events Supported 00:08:32.374 Namespace Attribute Notices: Supported 00:08:32.374 Firmware Activation Notices: Not Supported 00:08:32.374 ANA Change Notices: Not Supported 00:08:32.374 PLE Aggregate Log Change Notices: Not Supported 00:08:32.374 LBA Status Info Alert Notices: Not Supported 00:08:32.374 EGE Aggregate Log Change Notices: Not Supported 00:08:32.374 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.374 Zone Descriptor Change Notices: Not Supported 00:08:32.374 Discovery Log Change Notices: Not Supported 00:08:32.374 Controller Attributes 00:08:32.374 128-bit Host Identifier: Not Supported 00:08:32.374 Non-Operational Permissive Mode: Not Supported 00:08:32.374 NVM Sets: Not Supported 00:08:32.374 Read Recovery Levels: Not Supported 00:08:32.374 Endurance Groups: Not Supported 00:08:32.374 Predictable Latency Mode: Not Supported 00:08:32.374 Traffic Based Keep Alive: Not Supported 00:08:32.374 Namespace Granularity: Not Supported 00:08:32.374 SQ Associations: Not Supported 00:08:32.374 UUID List: Not Supported 00:08:32.374 Multi-Domain Subsystem: Not Supported 00:08:32.374 Fixed Capacity Management: Not Supported 00:08:32.374 Variable Capacity Management: Not Supported 00:08:32.374 Delete Endurance Group: Not Supported 00:08:32.374 Delete NVM Set: Not Supported 00:08:32.374 Extended LBA Formats Supported: Supported 00:08:32.374 Flexible Data Placement Supported: Not Supported 00:08:32.374 00:08:32.374 Controller Memory Buffer Support 00:08:32.374 ================================ 00:08:32.374 Supported: No 00:08:32.374 00:08:32.374 Persistent Memory Region Support 00:08:32.374 ================================ 00:08:32.374 Supported: No 00:08:32.374 00:08:32.374 Admin Command Set Attributes 00:08:32.374 ============================ 00:08:32.374 Security Send/Receive: Not Supported 00:08:32.374
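Each controller dump in this log is produced by one spdk_nvme_identify invocation like the one above, driven by the for bdf in "${bdfs[@]}" loop in nvme.sh. A minimal sketch of that loop (the BDF list here is illustrative, taken from the four controllers that appear in this log; the real script discovers it at runtime, and the -r transport-ID string and -i argument are copied verbatim from the invocation above):

    # One identify dump per PCIe controller, mirroring the nvme.sh trace above.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
    done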
Format NVM: Supported 00:08:32.374 Firmware Activate/Download: Not Supported 00:08:32.374 Namespace Management: Supported 00:08:32.374 Device Self-Test: Not Supported 00:08:32.374 Directives: Supported 00:08:32.374 NVMe-MI: Not Supported 00:08:32.374 Virtualization Management: Not Supported 00:08:32.374 Doorbell Buffer Config: Supported 00:08:32.374 Get LBA Status Capability: Not Supported 00:08:32.374 Command & Feature Lockdown Capability: Not Supported 00:08:32.374 Abort Command Limit: 4 00:08:32.374 Async Event Request Limit: 4 00:08:32.374 Number of Firmware Slots: N/A 00:08:32.374 Firmware Slot 1 Read-Only: N/A 00:08:32.374 Firmware Activation Without Reset: N/A 00:08:32.374 Multiple Update Detection Support: N/A 00:08:32.374 Firmware Update Granularity: No Information Provided 00:08:32.374 Per-Namespace SMART Log: Yes 00:08:32.374 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.374 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:32.374 Command Effects Log Page: Supported 00:08:32.374 Get Log Page Extended Data: Supported 00:08:32.374 Telemetry Log Pages: Not Supported 00:08:32.374 Persistent Event Log Pages: Not Supported 00:08:32.374 Supported Log Pages Log Page: May Support 00:08:32.374 Commands Supported & Effects Log Page: Not Supported 00:08:32.374 Feature Identifiers & Effects Log Page: May Support 00:08:32.374 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.374 Data Area 4 for Telemetry Log: Not Supported 00:08:32.374 Error Log Page Entries Supported: 1 00:08:32.374 Keep Alive: Not Supported 00:08:32.374 00:08:32.374 NVM Command Set Attributes 00:08:32.374 ========================== 00:08:32.374 Submission Queue Entry Size 00:08:32.374 Max: 64 00:08:32.374 Min: 64 00:08:32.374 Completion Queue Entry Size 00:08:32.374 Max: 16 00:08:32.374 Min: 16 00:08:32.374 Number of Namespaces: 256 00:08:32.374 Compare Command: Supported 00:08:32.374 Write Uncorrectable Command: Not Supported 00:08:32.374 Dataset Management Command: Supported 00:08:32.374 Write Zeroes Command: Supported 00:08:32.374 Set Features Save Field: Supported 00:08:32.374 Reservations: Not Supported 00:08:32.374 Timestamp: Supported 00:08:32.374 Copy: Supported 00:08:32.374 Volatile Write Cache: Present 00:08:32.374 Atomic Write Unit (Normal): 1 00:08:32.374 Atomic Write Unit (PFail): 1 00:08:32.374 Atomic Compare & Write Unit: 1 00:08:32.374 Fused Compare & Write: Not Supported 00:08:32.374 Scatter-Gather List 00:08:32.374 SGL Command Set: Supported 00:08:32.374 SGL Keyed: Not Supported 00:08:32.374 SGL Bit Bucket Descriptor: Not Supported 00:08:32.374 SGL Metadata Pointer: Not Supported 00:08:32.374 Oversized SGL: Not Supported 00:08:32.374 SGL Metadata Address: Not Supported 00:08:32.374 SGL Offset: Not Supported 00:08:32.374 Transport SGL Data Block: Not Supported 00:08:32.374 Replay Protected Memory Block: Not Supported 00:08:32.374 00:08:32.374 Firmware Slot Information 00:08:32.374 ========================= 00:08:32.374 Active slot: 1 00:08:32.374 Slot 1 Firmware Revision: 1.0 00:08:32.374 00:08:32.374 00:08:32.374 Commands Supported and Effects 00:08:32.374 ============================== 00:08:32.374 Admin Commands 00:08:32.374 -------------- 00:08:32.374 Delete I/O Submission Queue (00h): Supported 00:08:32.374 Create I/O Submission Queue (01h): Supported 00:08:32.374 Get Log Page (02h): Supported 00:08:32.374 Delete I/O Completion Queue (04h): Supported 00:08:32.374 Create I/O Completion Queue (05h): Supported 00:08:32.374 Identify (06h): Supported 00:08:32.375 Abort (08h): Supported
00:08:32.375 Set Features (09h): Supported 00:08:32.375 Get Features (0Ah): Supported 00:08:32.375 Asynchronous Event Request (0Ch): Supported 00:08:32.375 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.375 Directive Send (19h): Supported 00:08:32.375 Directive Receive (1Ah): Supported 00:08:32.375 Virtualization Management (1Ch): Supported 00:08:32.375 Doorbell Buffer Config (7Ch): Supported 00:08:32.375 Format NVM (80h): Supported LBA-Change 00:08:32.375 I/O Commands 00:08:32.375 ------------ 00:08:32.375 Flush (00h): Supported LBA-Change 00:08:32.375 Write (01h): Supported LBA-Change 00:08:32.375 Read (02h): Supported 00:08:32.375 Compare (05h): Supported 00:08:32.375 Write Zeroes (08h): Supported LBA-Change 00:08:32.375 Dataset Management (09h): Supported LBA-Change 00:08:32.375 Unknown (0Ch): Supported 00:08:32.375 Unknown (12h): Supported 00:08:32.375 Copy (19h): Supported LBA-Change 00:08:32.375 Unknown (1Dh): Supported LBA-Change 00:08:32.375 00:08:32.375 Error Log 00:08:32.375 ========= 00:08:32.375 00:08:32.375 Arbitration 00:08:32.375 =========== 00:08:32.375 Arbitration Burst: no limit 00:08:32.375 00:08:32.375 Power Management 00:08:32.375 ================ 00:08:32.375 Number of Power States: 1 00:08:32.375 Current Power State: Power State #0 00:08:32.375 Power State #0: 00:08:32.375 Max Power: 25.00 W 00:08:32.375 Non-Operational State: Operational 00:08:32.375 Entry Latency: 16 microseconds 00:08:32.375 Exit Latency: 4 microseconds 00:08:32.375 Relative Read Throughput: 0 00:08:32.375 Relative Read Latency: 0 00:08:32.375 Relative Write Throughput: 0 00:08:32.375 Relative Write Latency: 0 00:08:32.375 Idle Power: Not Reported 00:08:32.375 Active Power: Not Reported 00:08:32.375 Non-Operational Permissive Mode: Not Supported 00:08:32.375 00:08:32.375 Health Information 00:08:32.375 ================== 00:08:32.375 Critical Warnings: 00:08:32.375 Available Spare Space: OK 00:08:32.375 Temperature: OK 00:08:32.375 Device Reliability: OK 00:08:32.375 Read Only: No 00:08:32.375 Volatile Memory Backup: OK 00:08:32.375 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.375 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.375 Available Spare: 0% 00:08:32.375 Available Spare Threshold: 0% 00:08:32.375 Life Percentage Used: 0% 00:08:32.375 Data Units Read: 658 00:08:32.375 Data Units Written: 587 00:08:32.375 Host Read Commands: 32403 00:08:32.375 Host Write Commands: 32189 00:08:32.375 Controller Busy Time: 0 minutes 00:08:32.375 Power Cycles: 0 00:08:32.375 Power On Hours: 0 hours 00:08:32.375 Unsafe Shutdowns: 0 00:08:32.375 Unrecoverable Media Errors: 0 00:08:32.375 Lifetime Error Log Entries: 0 00:08:32.375 Warning Temperature Time: 0 minutes 00:08:32.375 Critical Temperature Time: 0 minutes 00:08:32.375 00:08:32.375 Number of Queues 00:08:32.375 ================ 00:08:32.375 Number of I/O Submission Queues: 64 00:08:32.375 Number of I/O Completion Queues: 64 00:08:32.375 00:08:32.375 ZNS Specific Controller Data 00:08:32.375 ============================ 00:08:32.375 Zone Append Size Limit: 0 00:08:32.375 00:08:32.375 00:08:32.375 Active Namespaces 00:08:32.375 ================= 00:08:32.375 Namespace ID:1 00:08:32.375 Error Recovery Timeout: Unlimited 00:08:32.375 Command Set Identifier: NVM (00h) 00:08:32.375 Deallocate: Supported 00:08:32.375 Deallocated/Unwritten Error: Supported 00:08:32.375 Deallocated Read Value: All 0x00 00:08:32.375 Deallocate in Write Zeroes: Not Supported 00:08:32.375 Deallocated Guard Field: 0xFFFF 00:08:32.375 Flush: 
Supported 00:08:32.375 Reservation: Not Supported 00:08:32.375 Metadata Transferred as: Separate Metadata Buffer 00:08:32.375 Namespace Sharing Capabilities: Private 00:08:32.375 Size (in LBAs): 1548666 (5GiB) 00:08:32.375 Capacity (in LBAs): 1548666 (5GiB) 00:08:32.375 Utilization (in LBAs): 1548666 (5GiB) 00:08:32.375 Thin Provisioning: Not Supported 00:08:32.375 Per-NS Atomic Units: No 00:08:32.375 Maximum Single Source Range Length: 128 00:08:32.375 Maximum Copy Length: 128 00:08:32.375 Maximum Source Range Count: 128 00:08:32.375 NGUID/EUI64 Never Reused: No 00:08:32.375 Namespace Write Protected: No 00:08:32.375 Number of LBA Formats: 8 00:08:32.375 Current LBA Format: LBA Format #07 00:08:32.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.375 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.375 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.375 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.375 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.375 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.375 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.375 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.375 00:08:32.375 NVM Specific Namespace Data 00:08:32.375 =========================== 00:08:32.375 Logical Block Storage Tag Mask: 0 00:08:32.375 Protection Information Capabilities: 00:08:32.375 16b Guard Protection Information Storage Tag Support: No 00:08:32.375 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.375 Storage Tag Check Read Support: No 00:08:32.375 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.375 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:32.375 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:32.634 ===================================================== 00:08:32.634 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.634 ===================================================== 00:08:32.634 Controller Capabilities/Features 00:08:32.634 ================================ 00:08:32.634 Vendor ID: 1b36 00:08:32.634 Subsystem Vendor ID: 1af4 00:08:32.634 Serial Number: 12341 00:08:32.634 Model Number: QEMU NVMe Ctrl 00:08:32.634 Firmware Version: 8.0.0 00:08:32.634 Recommended Arb Burst: 6 00:08:32.634 IEEE OUI Identifier: 00 54 52 00:08:32.634 Multi-path I/O 00:08:32.634 May have multiple subsystem ports: No 00:08:32.634 May have multiple controllers: No 00:08:32.634 Associated with SR-IOV VF: No 00:08:32.634 Max Data Transfer Size: 524288 00:08:32.634 Max Number of Namespaces: 256 00:08:32.634 Max Number of I/O Queues: 64 00:08:32.634 NVMe 
Specification Version (VS): 1.4 00:08:32.634 NVMe Specification Version (Identify): 1.4 00:08:32.634 Maximum Queue Entries: 2048 00:08:32.634 Contiguous Queues Required: Yes 00:08:32.634 Arbitration Mechanisms Supported 00:08:32.634 Weighted Round Robin: Not Supported 00:08:32.634 Vendor Specific: Not Supported 00:08:32.634 Reset Timeout: 7500 ms 00:08:32.634 Doorbell Stride: 4 bytes 00:08:32.634 NVM Subsystem Reset: Not Supported 00:08:32.634 Command Sets Supported 00:08:32.634 NVM Command Set: Supported 00:08:32.634 Boot Partition: Not Supported 00:08:32.634 Memory Page Size Minimum: 4096 bytes 00:08:32.634 Memory Page Size Maximum: 65536 bytes 00:08:32.634 Persistent Memory Region: Not Supported 00:08:32.634 Optional Asynchronous Events Supported 00:08:32.634 Namespace Attribute Notices: Supported 00:08:32.634 Firmware Activation Notices: Not Supported 00:08:32.634 ANA Change Notices: Not Supported 00:08:32.634 PLE Aggregate Log Change Notices: Not Supported 00:08:32.634 LBA Status Info Alert Notices: Not Supported 00:08:32.634 EGE Aggregate Log Change Notices: Not Supported 00:08:32.634 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.634 Zone Descriptor Change Notices: Not Supported 00:08:32.634 Discovery Log Change Notices: Not Supported 00:08:32.634 Controller Attributes 00:08:32.634 128-bit Host Identifier: Not Supported 00:08:32.634 Non-Operational Permissive Mode: Not Supported 00:08:32.634 NVM Sets: Not Supported 00:08:32.634 Read Recovery Levels: Not Supported 00:08:32.634 Endurance Groups: Not Supported 00:08:32.634 Predictable Latency Mode: Not Supported 00:08:32.634 Traffic Based Keep Alive: Not Supported 00:08:32.634 Namespace Granularity: Not Supported 00:08:32.634 SQ Associations: Not Supported 00:08:32.634 UUID List: Not Supported 00:08:32.634 Multi-Domain Subsystem: Not Supported 00:08:32.634 Fixed Capacity Management: Not Supported 00:08:32.634 Variable Capacity Management: Not Supported 00:08:32.634 Delete Endurance Group: Not Supported 00:08:32.634 Delete NVM Set: Not Supported 00:08:32.634 Extended LBA Formats Supported: Supported 00:08:32.634 Flexible Data Placement Supported: Not Supported 00:08:32.634 00:08:32.634 Controller Memory Buffer Support 00:08:32.634 ================================ 00:08:32.634 Supported: No 00:08:32.634 00:08:32.634 Persistent Memory Region Support 00:08:32.634 ================================ 00:08:32.634 Supported: No 00:08:32.634 00:08:32.634 Admin Command Set Attributes 00:08:32.634 ============================ 00:08:32.634 Security Send/Receive: Not Supported 00:08:32.634 Format NVM: Supported 00:08:32.634 Firmware Activate/Download: Not Supported 00:08:32.634 Namespace Management: Supported 00:08:32.634 Device Self-Test: Not Supported 00:08:32.634 Directives: Supported 00:08:32.634 NVMe-MI: Not Supported 00:08:32.634 Virtualization Management: Not Supported 00:08:32.634 Doorbell Buffer Config: Supported 00:08:32.634 Get LBA Status Capability: Not Supported 00:08:32.634 Command & Feature Lockdown Capability: Not Supported 00:08:32.634 Abort Command Limit: 4 00:08:32.634 Async Event Request Limit: 4 00:08:32.634 Number of Firmware Slots: N/A 00:08:32.634 Firmware Slot 1 Read-Only: N/A 00:08:32.634 Firmware Activation Without Reset: N/A 00:08:32.634 Multiple Update Detection Support: N/A 00:08:32.634 Firmware Update Granularity: No Information Provided 00:08:32.634 Per-Namespace SMART Log: Yes 00:08:32.634 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.634 Subsystem NQN: nqn.2019-08.org.qemu:12341
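The memory page size bounds above are exponent-encoded in the controller's CAP register: supported page sizes are 2^(12 + MPS) for MPS between CAP.MPSMIN and CAP.MPSMAX, so the 4096-byte minimum corresponds to MPSMIN = 0 and the 65536-byte maximum to MPSMAX = 4. A one-liner to check the decoding:

    # Decode NVMe CAP memory page size fields: page size = 2^(12 + MPS).
    for mps in 0 4; do
      echo "MPS=$mps -> $(( 1 << (12 + mps) )) bytes"
    done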
00:08:32.634 Command Effects Log Page: Supported 00:08:32.634 Get Log Page Extended Data: Supported 00:08:32.634 Telemetry Log Pages: Not Supported 00:08:32.634 Persistent Event Log Pages: Not Supported 00:08:32.634 Supported Log Pages Log Page: May Support 00:08:32.634 Commands Supported & Effects Log Page: Not Supported 00:08:32.634 Feature Identifiers & Effects Log Page: May Support 00:08:32.634 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.634 Data Area 4 for Telemetry Log: Not Supported 00:08:32.634 Error Log Page Entries Supported: 1 00:08:32.634 Keep Alive: Not Supported 00:08:32.634 00:08:32.634 NVM Command Set Attributes 00:08:32.634 ========================== 00:08:32.634 Submission Queue Entry Size 00:08:32.634 Max: 64 00:08:32.634 Min: 64 00:08:32.634 Completion Queue Entry Size 00:08:32.634 Max: 16 00:08:32.634 Min: 16 00:08:32.634 Number of Namespaces: 256 00:08:32.634 Compare Command: Supported 00:08:32.634 Write Uncorrectable Command: Not Supported 00:08:32.634 Dataset Management Command: Supported 00:08:32.634 Write Zeroes Command: Supported 00:08:32.634 Set Features Save Field: Supported 00:08:32.634 Reservations: Not Supported 00:08:32.634 Timestamp: Supported 00:08:32.634 Copy: Supported 00:08:32.634 Volatile Write Cache: Present 00:08:32.634 Atomic Write Unit (Normal): 1 00:08:32.634 Atomic Write Unit (PFail): 1 00:08:32.634 Atomic Compare & Write Unit: 1 00:08:32.634 Fused Compare & Write: Not Supported 00:08:32.634 Scatter-Gather List 00:08:32.634 SGL Command Set: Supported 00:08:32.634 SGL Keyed: Not Supported 00:08:32.634 SGL Bit Bucket Descriptor: Not Supported 00:08:32.634 SGL Metadata Pointer: Not Supported 00:08:32.634 Oversized SGL: Not Supported 00:08:32.634 SGL Metadata Address: Not Supported 00:08:32.634 SGL Offset: Not Supported 00:08:32.634 Transport SGL Data Block: Not Supported 00:08:32.634 Replay Protected Memory Block: Not Supported 00:08:32.634 00:08:32.634 Firmware Slot Information 00:08:32.634 ========================= 00:08:32.634 Active slot: 1 00:08:32.634 Slot 1 Firmware Revision: 1.0 00:08:32.634 00:08:32.634 00:08:32.634 Commands Supported and Effects 00:08:32.634 ============================== 00:08:32.634 Admin Commands 00:08:32.634 -------------- 00:08:32.634 Delete I/O Submission Queue (00h): Supported 00:08:32.634 Create I/O Submission Queue (01h): Supported 00:08:32.634 Get Log Page (02h): Supported 00:08:32.634 Delete I/O Completion Queue (04h): Supported 00:08:32.634 Create I/O Completion Queue (05h): Supported 00:08:32.634 Identify (06h): Supported 00:08:32.634 Abort (08h): Supported 00:08:32.634 Set Features (09h): Supported 00:08:32.634 Get Features (0Ah): Supported 00:08:32.634 Asynchronous Event Request (0Ch): Supported 00:08:32.634 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.634 Directive Send (19h): Supported 00:08:32.634 Directive Receive (1Ah): Supported 00:08:32.634 Virtualization Management (1Ch): Supported 00:08:32.634 Doorbell Buffer Config (7Ch): Supported 00:08:32.634 Format NVM (80h): Supported LBA-Change 00:08:32.634 I/O Commands 00:08:32.634 ------------ 00:08:32.634 Flush (00h): Supported LBA-Change 00:08:32.634 Write (01h): Supported LBA-Change 00:08:32.634 Read (02h): Supported 00:08:32.634 Compare (05h): Supported 00:08:32.634 Write Zeroes (08h): Supported LBA-Change 00:08:32.634 Dataset Management (09h): Supported LBA-Change 00:08:32.634 Unknown (0Ch): Supported 00:08:32.634 Unknown (12h): Supported 00:08:32.634 Copy (19h): Supported LBA-Change 00:08:32.634 Unknown (1Dh):
Supported LBA-Change 00:08:32.634 00:08:32.634 Error Log 00:08:32.634 ========= 00:08:32.635 00:08:32.635 Arbitration 00:08:32.635 =========== 00:08:32.635 Arbitration Burst: no limit 00:08:32.635 00:08:32.635 Power Management 00:08:32.635 ================ 00:08:32.635 Number of Power States: 1 00:08:32.635 Current Power State: Power State #0 00:08:32.635 Power State #0: 00:08:32.635 Max Power: 25.00 W 00:08:32.635 Non-Operational State: Operational 00:08:32.635 Entry Latency: 16 microseconds 00:08:32.635 Exit Latency: 4 microseconds 00:08:32.635 Relative Read Throughput: 0 00:08:32.635 Relative Read Latency: 0 00:08:32.635 Relative Write Throughput: 0 00:08:32.635 Relative Write Latency: 0 00:08:32.635 Idle Power: Not Reported 00:08:32.635 Active Power: Not Reported 00:08:32.635 Non-Operational Permissive Mode: Not Supported 00:08:32.635 00:08:32.635 Health Information 00:08:32.635 ================== 00:08:32.635 Critical Warnings: 00:08:32.635 Available Spare Space: OK 00:08:32.635 Temperature: OK 00:08:32.635 Device Reliability: OK 00:08:32.635 Read Only: No 00:08:32.635 Volatile Memory Backup: OK 00:08:32.635 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.635 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.635 Available Spare: 0% 00:08:32.635 Available Spare Threshold: 0% 00:08:32.635 Life Percentage Used: 0% 00:08:32.635 Data Units Read: 1005 00:08:32.635 Data Units Written: 866 00:08:32.635 Host Read Commands: 47945 00:08:32.635 Host Write Commands: 46614 00:08:32.635 Controller Busy Time: 0 minutes 00:08:32.635 Power Cycles: 0 00:08:32.635 Power On Hours: 0 hours 00:08:32.635 Unsafe Shutdowns: 0 00:08:32.635 Unrecoverable Media Errors: 0 00:08:32.635 Lifetime Error Log Entries: 0 00:08:32.635 Warning Temperature Time: 0 minutes 00:08:32.635 Critical Temperature Time: 0 minutes 00:08:32.635 00:08:32.635 Number of Queues 00:08:32.635 ================ 00:08:32.635 Number of I/O Submission Queues: 64 00:08:32.635 Number of I/O Completion Queues: 64 00:08:32.635 00:08:32.635 ZNS Specific Controller Data 00:08:32.635 ============================ 00:08:32.635 Zone Append Size Limit: 0 00:08:32.635 00:08:32.635 00:08:32.635 Active Namespaces 00:08:32.635 ================= 00:08:32.635 Namespace ID:1 00:08:32.635 Error Recovery Timeout: Unlimited 00:08:32.635 Command Set Identifier: NVM (00h) 00:08:32.635 Deallocate: Supported 00:08:32.635 Deallocated/Unwritten Error: Supported 00:08:32.635 Deallocated Read Value: All 0x00 00:08:32.635 Deallocate in Write Zeroes: Not Supported 00:08:32.635 Deallocated Guard Field: 0xFFFF 00:08:32.635 Flush: Supported 00:08:32.635 Reservation: Not Supported 00:08:32.635 Namespace Sharing Capabilities: Private 00:08:32.635 Size (in LBAs): 1310720 (5GiB) 00:08:32.635 Capacity (in LBAs): 1310720 (5GiB) 00:08:32.635 Utilization (in LBAs): 1310720 (5GiB) 00:08:32.635 Thin Provisioning: Not Supported 00:08:32.635 Per-NS Atomic Units: No 00:08:32.635 Maximum Single Source Range Length: 128 00:08:32.635 Maximum Copy Length: 128 00:08:32.635 Maximum Source Range Count: 128 00:08:32.635 NGUID/EUI64 Never Reused: No 00:08:32.635 Namespace Write Protected: No 00:08:32.635 Number of LBA Formats: 8 00:08:32.635 Current LBA Format: LBA Format #04 00:08:32.635 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.635 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.635 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.635 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.635 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:32.635 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.635 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.635 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.635 00:08:32.635 NVM Specific Namespace Data 00:08:32.635 =========================== 00:08:32.635 Logical Block Storage Tag Mask: 0 00:08:32.635 Protection Information Capabilities: 00:08:32.635 16b Guard Protection Information Storage Tag Support: No 00:08:32.635 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.635 Storage Tag Check Read Support: No 00:08:32.635 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.635 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:32.635 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:32.894 ===================================================== 00:08:32.894 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.894 ===================================================== 00:08:32.894 Controller Capabilities/Features 00:08:32.894 ================================ 00:08:32.894 Vendor ID: 1b36 00:08:32.894 Subsystem Vendor ID: 1af4 00:08:32.894 Serial Number: 12342 00:08:32.894 Model Number: QEMU NVMe Ctrl 00:08:32.894 Firmware Version: 8.0.0 00:08:32.894 Recommended Arb Burst: 6 00:08:32.894 IEEE OUI Identifier: 00 54 52 00:08:32.894 Multi-path I/O 00:08:32.894 May have multiple subsystem ports: No 00:08:32.894 May have multiple controllers: No 00:08:32.894 Associated with SR-IOV VF: No 00:08:32.894 Max Data Transfer Size: 524288 00:08:32.894 Max Number of Namespaces: 256 00:08:32.894 Max Number of I/O Queues: 64 00:08:32.894 NVMe Specification Version (VS): 1.4 00:08:32.894 NVMe Specification Version (Identify): 1.4 00:08:32.894 Maximum Queue Entries: 2048 00:08:32.894 Contiguous Queues Required: Yes 00:08:32.894 Arbitration Mechanisms Supported 00:08:32.894 Weighted Round Robin: Not Supported 00:08:32.894 Vendor Specific: Not Supported 00:08:32.894 Reset Timeout: 7500 ms 00:08:32.894 Doorbell Stride: 4 bytes 00:08:32.894 NVM Subsystem Reset: Not Supported 00:08:32.894 Command Sets Supported 00:08:32.894 NVM Command Set: Supported 00:08:32.894 Boot Partition: Not Supported 00:08:32.894 Memory Page Size Minimum: 4096 bytes 00:08:32.894 Memory Page Size Maximum: 65536 bytes 00:08:32.894 Persistent Memory Region: Not Supported 00:08:32.894 Optional Asynchronous Events Supported 00:08:32.894 Namespace Attribute Notices: Supported 00:08:32.894 Firmware Activation Notices: Not Supported 00:08:32.894 ANA Change Notices: Not Supported 00:08:32.894 PLE Aggregate Log Change Notices: Not Supported 00:08:32.894 LBA Status Info Alert Notices: 
Not Supported 00:08:32.894 EGE Aggregate Log Change Notices: Not Supported 00:08:32.894 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.894 Zone Descriptor Change Notices: Not Supported 00:08:32.894 Discovery Log Change Notices: Not Supported 00:08:32.894 Controller Attributes 00:08:32.894 128-bit Host Identifier: Not Supported 00:08:32.894 Non-Operational Permissive Mode: Not Supported 00:08:32.894 NVM Sets: Not Supported 00:08:32.894 Read Recovery Levels: Not Supported 00:08:32.894 Endurance Groups: Not Supported 00:08:32.894 Predictable Latency Mode: Not Supported 00:08:32.894 Traffic Based Keep Alive: Not Supported 00:08:32.894 Namespace Granularity: Not Supported 00:08:32.894 SQ Associations: Not Supported 00:08:32.894 UUID List: Not Supported 00:08:32.894 Multi-Domain Subsystem: Not Supported 00:08:32.894 Fixed Capacity Management: Not Supported 00:08:32.894 Variable Capacity Management: Not Supported 00:08:32.894 Delete Endurance Group: Not Supported 00:08:32.894 Delete NVM Set: Not Supported 00:08:32.894 Extended LBA Formats Supported: Supported 00:08:32.894 Flexible Data Placement Supported: Not Supported 00:08:32.894 00:08:32.894 Controller Memory Buffer Support 00:08:32.894 ================================ 00:08:32.894 Supported: No 00:08:32.894 00:08:32.894 Persistent Memory Region Support 00:08:32.894 ================================ 00:08:32.894 Supported: No 00:08:32.894 00:08:32.894 Admin Command Set Attributes 00:08:32.894 ============================ 00:08:32.894 Security Send/Receive: Not Supported 00:08:32.894 Format NVM: Supported 00:08:32.894 Firmware Activate/Download: Not Supported 00:08:32.894 Namespace Management: Supported 00:08:32.894 Device Self-Test: Not Supported 00:08:32.894 Directives: Supported 00:08:32.894 NVMe-MI: Not Supported 00:08:32.894 Virtualization Management: Not Supported 00:08:32.894 Doorbell Buffer Config: Supported 00:08:32.894 Get LBA Status Capability: Not Supported 00:08:32.894 Command & Feature Lockdown Capability: Not Supported 00:08:32.894 Abort Command Limit: 4 00:08:32.894 Async Event Request Limit: 4 00:08:32.894 Number of Firmware Slots: N/A 00:08:32.894 Firmware Slot 1 Read-Only: N/A 00:08:32.894 Firmware Activation Without Reset: N/A 00:08:32.894 Multiple Update Detection Support: N/A 00:08:32.894 Firmware Update Granularity: No Information Provided 00:08:32.894 Per-Namespace SMART Log: Yes 00:08:32.894 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.894 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:32.894 Command Effects Log Page: Supported 00:08:32.894 Get Log Page Extended Data: Supported 00:08:32.894 Telemetry Log Pages: Not Supported 00:08:32.894 Persistent Event Log Pages: Not Supported 00:08:32.894 Supported Log Pages Log Page: May Support 00:08:32.894 Commands Supported & Effects Log Page: Not Supported 00:08:32.894 Feature Identifiers & Effects Log Page: May Support 00:08:32.894 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.894 Data Area 4 for Telemetry Log: Not Supported 00:08:32.894 Error Log Page Entries Supported: 1 00:08:32.894 Keep Alive: Not Supported 00:08:32.894 00:08:32.894 NVM Command Set Attributes 00:08:32.894 ========================== 00:08:32.894 Submission Queue Entry Size 00:08:32.894 Max: 64 00:08:32.894 Min: 64 00:08:32.894 Completion Queue Entry Size 00:08:32.894 Max: 16 00:08:32.894 Min: 16 00:08:32.894 Number of Namespaces: 256 00:08:32.894 Compare Command: Supported 00:08:32.894 Write Uncorrectable Command: Not Supported 00:08:32.894 Dataset Management Command:
Supported 00:08:32.894 Write Zeroes Command: Supported 00:08:32.894 Set Features Save Field: Supported 00:08:32.894 Reservations: Not Supported 00:08:32.894 Timestamp: Supported 00:08:32.894 Copy: Supported 00:08:32.894 Volatile Write Cache: Present 00:08:32.894 Atomic Write Unit (Normal): 1 00:08:32.894 Atomic Write Unit (PFail): 1 00:08:32.894 Atomic Compare & Write Unit: 1 00:08:32.894 Fused Compare & Write: Not Supported 00:08:32.894 Scatter-Gather List 00:08:32.894 SGL Command Set: Supported 00:08:32.894 SGL Keyed: Not Supported 00:08:32.894 SGL Bit Bucket Descriptor: Not Supported 00:08:32.894 SGL Metadata Pointer: Not Supported 00:08:32.894 Oversized SGL: Not Supported 00:08:32.894 SGL Metadata Address: Not Supported 00:08:32.894 SGL Offset: Not Supported 00:08:32.894 Transport SGL Data Block: Not Supported 00:08:32.894 Replay Protected Memory Block: Not Supported 00:08:32.894 00:08:32.894 Firmware Slot Information 00:08:32.894 ========================= 00:08:32.894 Active slot: 1 00:08:32.894 Slot 1 Firmware Revision: 1.0 00:08:32.894 00:08:32.894 00:08:32.894 Commands Supported and Effects 00:08:32.894 ============================== 00:08:32.894 Admin Commands 00:08:32.894 -------------- 00:08:32.894 Delete I/O Submission Queue (00h): Supported 00:08:32.894 Create I/O Submission Queue (01h): Supported 00:08:32.894 Get Log Page (02h): Supported 00:08:32.894 Delete I/O Completion Queue (04h): Supported 00:08:32.894 Create I/O Completion Queue (05h): Supported 00:08:32.894 Identify (06h): Supported 00:08:32.894 Abort (08h): Supported 00:08:32.894 Set Features (09h): Supported 00:08:32.894 Get Features (0Ah): Supported 00:08:32.894 Asynchronous Event Request (0Ch): Supported 00:08:32.894 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.894 Directive Send (19h): Supported 00:08:32.894 Directive Receive (1Ah): Supported 00:08:32.894 Virtualization Management (1Ch): Supported 00:08:32.894 Doorbell Buffer Config (7Ch): Supported 00:08:32.894 Format NVM (80h): Supported LBA-Change 00:08:32.894 I/O Commands 00:08:32.895 ------------ 00:08:32.895 Flush (00h): Supported LBA-Change 00:08:32.895 Write (01h): Supported LBA-Change 00:08:32.895 Read (02h): Supported 00:08:32.895 Compare (05h): Supported 00:08:32.895 Write Zeroes (08h): Supported LBA-Change 00:08:32.895 Dataset Management (09h): Supported LBA-Change 00:08:32.895 Unknown (0Ch): Supported 00:08:32.895 Unknown (12h): Supported 00:08:32.895 Copy (19h): Supported LBA-Change 00:08:32.895 Unknown (1Dh): Supported LBA-Change 00:08:32.895 00:08:32.895 Error Log 00:08:32.895 ========= 00:08:32.895 00:08:32.895 Arbitration 00:08:32.895 =========== 00:08:32.895 Arbitration Burst: no limit 00:08:32.895 00:08:32.895 Power Management 00:08:32.895 ================ 00:08:32.895 Number of Power States: 1 00:08:32.895 Current Power State: Power State #0 00:08:32.895 Power State #0: 00:08:32.895 Max Power: 25.00 W 00:08:32.895 Non-Operational State: Operational 00:08:32.895 Entry Latency: 16 microseconds 00:08:32.895 Exit Latency: 4 microseconds 00:08:32.895 Relative Read Throughput: 0 00:08:32.895 Relative Read Latency: 0 00:08:32.895 Relative Write Throughput: 0 00:08:32.895 Relative Write Latency: 0 00:08:32.895 Idle Power: Not Reported 00:08:32.895 Active Power: Not Reported 00:08:32.895 Non-Operational Permissive Mode: Not Supported 00:08:32.895 00:08:32.895 Health Information 00:08:32.895 ================== 00:08:32.895 Critical Warnings: 00:08:32.895 Available Spare Space: OK 00:08:32.895 Temperature: OK 00:08:32.895 Device 
Reliability: OK 00:08:32.895 Read Only: No 00:08:32.895 Volatile Memory Backup: OK 00:08:32.895 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.895 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.895 Available Spare: 0% 00:08:32.895 Available Spare Threshold: 0% 00:08:32.895 Life Percentage Used: 0% 00:08:32.895 Data Units Read: 2069 00:08:32.895 Data Units Written: 1856 00:08:32.895 Host Read Commands: 98723 00:08:32.895 Host Write Commands: 96992 00:08:32.895 Controller Busy Time: 0 minutes 00:08:32.895 Power Cycles: 0 00:08:32.895 Power On Hours: 0 hours 00:08:32.895 Unsafe Shutdowns: 0 00:08:32.895 Unrecoverable Media Errors: 0 00:08:32.895 Lifetime Error Log Entries: 0 00:08:32.895 Warning Temperature Time: 0 minutes 00:08:32.895 Critical Temperature Time: 0 minutes 00:08:32.895 00:08:32.895 Number of Queues 00:08:32.895 ================ 00:08:32.895 Number of I/O Submission Queues: 64 00:08:32.895 Number of I/O Completion Queues: 64 00:08:32.895 00:08:32.895 ZNS Specific Controller Data 00:08:32.895 ============================ 00:08:32.895 Zone Append Size Limit: 0 00:08:32.895 00:08:32.895 00:08:32.895 Active Namespaces 00:08:32.895 ================= 00:08:32.895 Namespace ID:1 00:08:32.895 Error Recovery Timeout: Unlimited 00:08:32.895 Command Set Identifier: NVM (00h) 00:08:32.895 Deallocate: Supported 00:08:32.895 Deallocated/Unwritten Error: Supported 00:08:32.895 Deallocated Read Value: All 0x00 00:08:32.895 Deallocate in Write Zeroes: Not Supported 00:08:32.895 Deallocated Guard Field: 0xFFFF 00:08:32.895 Flush: Supported 00:08:32.895 Reservation: Not Supported 00:08:32.895 Namespace Sharing Capabilities: Private 00:08:32.895 Size (in LBAs): 1048576 (4GiB) 00:08:32.895 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.895 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.895 Thin Provisioning: Not Supported 00:08:32.895 Per-NS Atomic Units: No 00:08:32.895 Maximum Single Source Range Length: 128 00:08:32.895 Maximum Copy Length: 128 00:08:32.895 Maximum Source Range Count: 128 00:08:32.895 NGUID/EUI64 Never Reused: No 00:08:32.895 Namespace Write Protected: No 00:08:32.895 Number of LBA Formats: 8 00:08:32.895 Current LBA Format: LBA Format #04 00:08:32.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.895 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.895 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.895 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.895 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.895 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.895 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.895 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.895 00:08:32.895 NVM Specific Namespace Data 00:08:32.895 =========================== 00:08:32.895 Logical Block Storage Tag Mask: 0 00:08:32.895 Protection Information Capabilities: 00:08:32.895 16b Guard Protection Information Storage Tag Support: No 00:08:32.895 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.895 Storage Tag Check Read Support: No 00:08:32.895 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Namespace ID:2 00:08:32.895 Error Recovery Timeout: Unlimited 00:08:32.895 Command Set Identifier: NVM (00h) 00:08:32.895 Deallocate: Supported 00:08:32.895 Deallocated/Unwritten Error: Supported 00:08:32.895 Deallocated Read Value: All 0x00 00:08:32.895 Deallocate in Write Zeroes: Not Supported 00:08:32.895 Deallocated Guard Field: 0xFFFF 00:08:32.895 Flush: Supported 00:08:32.895 Reservation: Not Supported 00:08:32.895 Namespace Sharing Capabilities: Private 00:08:32.895 Size (in LBAs): 1048576 (4GiB) 00:08:32.895 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.895 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.895 Thin Provisioning: Not Supported 00:08:32.895 Per-NS Atomic Units: No 00:08:32.895 Maximum Single Source Range Length: 128 00:08:32.895 Maximum Copy Length: 128 00:08:32.895 Maximum Source Range Count: 128 00:08:32.895 NGUID/EUI64 Never Reused: No 00:08:32.895 Namespace Write Protected: No 00:08:32.895 Number of LBA Formats: 8 00:08:32.895 Current LBA Format: LBA Format #04 00:08:32.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.895 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.895 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.895 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.895 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.895 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.895 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.895 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.895 00:08:32.895 NVM Specific Namespace Data 00:08:32.895 =========================== 00:08:32.895 Logical Block Storage Tag Mask: 0 00:08:32.895 Protection Information Capabilities: 00:08:32.895 16b Guard Protection Information Storage Tag Support: No 00:08:32.895 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.895 Storage Tag Check Read Support: No 00:08:32.895 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.895 Namespace ID:3 00:08:32.895 Error Recovery Timeout: Unlimited 00:08:32.895 Command Set Identifier: NVM (00h) 00:08:32.895 Deallocate: Supported 00:08:32.895 Deallocated/Unwritten Error: Supported 00:08:32.895 Deallocated Read Value: All 0x00 00:08:32.895 Deallocate in Write Zeroes: Not Supported 00:08:32.895 Deallocated Guard Field: 0xFFFF 00:08:32.895 Flush: Supported 00:08:32.895 Reservation: Not Supported 00:08:32.895 
Namespace Sharing Capabilities: Private 00:08:32.895 Size (in LBAs): 1048576 (4GiB) 00:08:32.895 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.895 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.895 Thin Provisioning: Not Supported 00:08:32.895 Per-NS Atomic Units: No 00:08:32.895 Maximum Single Source Range Length: 128 00:08:32.895 Maximum Copy Length: 128 00:08:32.895 Maximum Source Range Count: 128 00:08:32.895 NGUID/EUI64 Never Reused: No 00:08:32.895 Namespace Write Protected: No 00:08:32.895 Number of LBA Formats: 8 00:08:32.895 Current LBA Format: LBA Format #04 00:08:32.896 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.896 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.896 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.896 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.896 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.896 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.896 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.896 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.896 00:08:32.896 NVM Specific Namespace Data 00:08:32.896 =========================== 00:08:32.896 Logical Block Storage Tag Mask: 0 00:08:32.896 Protection Information Capabilities: 00:08:32.896 16b Guard Protection Information Storage Tag Support: No 00:08:32.896 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:32.896 Storage Tag Check Read Support: No 00:08:32.896 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:32.896 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:32.896 05:55:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:33.154 ===================================================== 00:08:33.154 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.154 ===================================================== 00:08:33.154 Controller Capabilities/Features 00:08:33.154 ================================ 00:08:33.154 Vendor ID: 1b36 00:08:33.154 Subsystem Vendor ID: 1af4 00:08:33.154 Serial Number: 12343 00:08:33.154 Model Number: QEMU NVMe Ctrl 00:08:33.154 Firmware Version: 8.0.0 00:08:33.154 Recommended Arb Burst: 6 00:08:33.154 IEEE OUI Identifier: 00 54 52 00:08:33.154 Multi-path I/O 00:08:33.154 May have multiple subsystem ports: No 00:08:33.154 May have multiple controllers: Yes 00:08:33.154 Associated with SR-IOV VF: No 00:08:33.154 Max Data Transfer Size: 524288 00:08:33.154 Max Number of Namespaces: 256 00:08:33.154 Max Number of I/O Queues: 64 00:08:33.154 NVMe Specification Version (VS): 1.4 00:08:33.154 NVMe Specification Version (Identify): 1.4 00:08:33.154 Maximum Queue Entries: 2048 
00:08:33.154 Contiguous Queues Required: Yes 00:08:33.154 Arbitration Mechanisms Supported 00:08:33.154 Weighted Round Robin: Not Supported 00:08:33.154 Vendor Specific: Not Supported 00:08:33.154 Reset Timeout: 7500 ms 00:08:33.154 Doorbell Stride: 4 bytes 00:08:33.154 NVM Subsystem Reset: Not Supported 00:08:33.154 Command Sets Supported 00:08:33.154 NVM Command Set: Supported 00:08:33.154 Boot Partition: Not Supported 00:08:33.154 Memory Page Size Minimum: 4096 bytes 00:08:33.154 Memory Page Size Maximum: 65536 bytes 00:08:33.154 Persistent Memory Region: Not Supported 00:08:33.154 Optional Asynchronous Events Supported 00:08:33.154 Namespace Attribute Notices: Supported 00:08:33.154 Firmware Activation Notices: Not Supported 00:08:33.154 ANA Change Notices: Not Supported 00:08:33.154 PLE Aggregate Log Change Notices: Not Supported 00:08:33.154 LBA Status Info Alert Notices: Not Supported 00:08:33.154 EGE Aggregate Log Change Notices: Not Supported 00:08:33.154 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.154 Zone Descriptor Change Notices: Not Supported 00:08:33.154 Discovery Log Change Notices: Not Supported 00:08:33.154 Controller Attributes 00:08:33.154 128-bit Host Identifier: Not Supported 00:08:33.154 Non-Operational Permissive Mode: Not Supported 00:08:33.154 NVM Sets: Not Supported 00:08:33.154 Read Recovery Levels: Not Supported 00:08:33.154 Endurance Groups: Supported 00:08:33.154 Predictable Latency Mode: Not Supported 00:08:33.154 Traffic Based Keep Alive: Not Supported 00:08:33.154 Namespace Granularity: Not Supported 00:08:33.154 SQ Associations: Not Supported 00:08:33.154 UUID List: Not Supported 00:08:33.154 Multi-Domain Subsystem: Not Supported 00:08:33.154 Fixed Capacity Management: Not Supported 00:08:33.154 Variable Capacity Management: Not Supported 00:08:33.154 Delete Endurance Group: Not Supported 00:08:33.154 Delete NVM Set: Not Supported 00:08:33.154 Extended LBA Formats Supported: Supported 00:08:33.154 Flexible Data Placement Supported: Supported 00:08:33.154 00:08:33.154 Controller Memory Buffer Support 00:08:33.154 ================================ 00:08:33.154 Supported: No 00:08:33.154 00:08:33.154 Persistent Memory Region Support 00:08:33.154 ================================ 00:08:33.154 Supported: No 00:08:33.154 00:08:33.154 Admin Command Set Attributes 00:08:33.154 ============================ 00:08:33.154 Security Send/Receive: Not Supported 00:08:33.154 Format NVM: Supported 00:08:33.154 Firmware Activate/Download: Not Supported 00:08:33.154 Namespace Management: Supported 00:08:33.154 Device Self-Test: Not Supported 00:08:33.154 Directives: Supported 00:08:33.154 NVMe-MI: Not Supported 00:08:33.154 Virtualization Management: Not Supported 00:08:33.154 Doorbell Buffer Config: Supported 00:08:33.154 Get LBA Status Capability: Not Supported 00:08:33.154 Command & Feature Lockdown Capability: Not Supported 00:08:33.154 Abort Command Limit: 4 00:08:33.154 Async Event Request Limit: 4 00:08:33.154 Number of Firmware Slots: N/A 00:08:33.154 Firmware Slot 1 Read-Only: N/A 00:08:33.154 Firmware Activation Without Reset: N/A 00:08:33.154 Multiple Update Detection Support: N/A 00:08:33.154 Firmware Update Granularity: No Information Provided 00:08:33.154 Per-Namespace SMART Log: Yes 00:08:33.154 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.154 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:33.154 Command Effects Log Page: Supported 00:08:33.154 Get Log Page Extended Data: Supported 00:08:33.154 Telemetry Log Pages: Not 
Supported 00:08:33.154 Persistent Event Log Pages: Not Supported 00:08:33.154 Supported Log Pages Log Page: May Support 00:08:33.154 Commands Supported & Effects Log Page: Not Supported 00:08:33.154 Feature Identifiers & Effects Log Page: May Support 00:08:33.154 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.154 Data Area 4 for Telemetry Log: Not Supported 00:08:33.154 Error Log Page Entries Supported: 1 00:08:33.154 Keep Alive: Not Supported 00:08:33.154 00:08:33.154 NVM Command Set Attributes 00:08:33.154 ========================== 00:08:33.154 Submission Queue Entry Size 00:08:33.154 Max: 64 00:08:33.154 Min: 64 00:08:33.154 Completion Queue Entry Size 00:08:33.154 Max: 16 00:08:33.154 Min: 16 00:08:33.154 Number of Namespaces: 256 00:08:33.154 Compare Command: Supported 00:08:33.154 Write Uncorrectable Command: Not Supported 00:08:33.154 Dataset Management Command: Supported 00:08:33.154 Write Zeroes Command: Supported 00:08:33.154 Set Features Save Field: Supported 00:08:33.154 Reservations: Not Supported 00:08:33.154 Timestamp: Supported 00:08:33.154 Copy: Supported 00:08:33.154 Volatile Write Cache: Present 00:08:33.154 Atomic Write Unit (Normal): 1 00:08:33.154 Atomic Write Unit (PFail): 1 00:08:33.154 Atomic Compare & Write Unit: 1 00:08:33.154 Fused Compare & Write: Not Supported 00:08:33.154 Scatter-Gather List 00:08:33.154 SGL Command Set: Supported 00:08:33.154 SGL Keyed: Not Supported 00:08:33.155 SGL Bit Bucket Descriptor: Not Supported 00:08:33.155 SGL Metadata Pointer: Not Supported 00:08:33.155 Oversized SGL: Not Supported 00:08:33.155 SGL Metadata Address: Not Supported 00:08:33.155 SGL Offset: Not Supported 00:08:33.155 Transport SGL Data Block: Not Supported 00:08:33.155 Replay Protected Memory Block: Not Supported 00:08:33.155 00:08:33.155 Firmware Slot Information 00:08:33.155 ========================= 00:08:33.155 Active slot: 1 00:08:33.155 Slot 1 Firmware Revision: 1.0 00:08:33.155 00:08:33.155 00:08:33.155 Commands Supported and Effects 00:08:33.155 ============================== 00:08:33.155 Admin Commands 00:08:33.155 -------------- 00:08:33.155 Delete I/O Submission Queue (00h): Supported 00:08:33.155 Create I/O Submission Queue (01h): Supported 00:08:33.155 Get Log Page (02h): Supported 00:08:33.155 Delete I/O Completion Queue (04h): Supported 00:08:33.155 Create I/O Completion Queue (05h): Supported 00:08:33.155 Identify (06h): Supported 00:08:33.155 Abort (08h): Supported 00:08:33.155 Set Features (09h): Supported 00:08:33.155 Get Features (0Ah): Supported 00:08:33.155 Asynchronous Event Request (0Ch): Supported 00:08:33.155 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.155 Directive Send (19h): Supported 00:08:33.155 Directive Receive (1Ah): Supported 00:08:33.155 Virtualization Management (1Ch): Supported 00:08:33.155 Doorbell Buffer Config (7Ch): Supported 00:08:33.155 Format NVM (80h): Supported LBA-Change 00:08:33.155 I/O Commands 00:08:33.155 ------------ 00:08:33.155 Flush (00h): Supported LBA-Change 00:08:33.155 Write (01h): Supported LBA-Change 00:08:33.155 Read (02h): Supported 00:08:33.155 Compare (05h): Supported 00:08:33.155 Write Zeroes (08h): Supported LBA-Change 00:08:33.155 Dataset Management (09h): Supported LBA-Change 00:08:33.155 Unknown (0Ch): Supported 00:08:33.155 Unknown (12h): Supported 00:08:33.155 Copy (19h): Supported LBA-Change 00:08:33.155 Unknown (1Dh): Supported LBA-Change 00:08:33.155 00:08:33.155 Error Log 00:08:33.155 ========= 00:08:33.155 00:08:33.155 Arbitration 00:08:33.155 =========== 
00:08:33.155 Arbitration Burst: no limit 00:08:33.155 00:08:33.155 Power Management 00:08:33.155 ================ 00:08:33.155 Number of Power States: 1 00:08:33.155 Current Power State: Power State #0 00:08:33.155 Power State #0: 00:08:33.155 Max Power: 25.00 W 00:08:33.155 Non-Operational State: Operational 00:08:33.155 Entry Latency: 16 microseconds 00:08:33.155 Exit Latency: 4 microseconds 00:08:33.155 Relative Read Throughput: 0 00:08:33.155 Relative Read Latency: 0 00:08:33.155 Relative Write Throughput: 0 00:08:33.155 Relative Write Latency: 0 00:08:33.155 Idle Power: Not Reported 00:08:33.155 Active Power: Not Reported 00:08:33.155 Non-Operational Permissive Mode: Not Supported 00:08:33.155 00:08:33.155 Health Information 00:08:33.155 ================== 00:08:33.155 Critical Warnings: 00:08:33.155 Available Spare Space: OK 00:08:33.155 Temperature: OK 00:08:33.155 Device Reliability: OK 00:08:33.155 Read Only: No 00:08:33.155 Volatile Memory Backup: OK 00:08:33.155 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.155 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.155 Available Spare: 0% 00:08:33.155 Available Spare Threshold: 0% 00:08:33.155 Life Percentage Used: 0% 00:08:33.155 Data Units Read: 721 00:08:33.155 Data Units Written: 650 00:08:33.155 Host Read Commands: 33249 00:08:33.155 Host Write Commands: 32672 00:08:33.155 Controller Busy Time: 0 minutes 00:08:33.155 Power Cycles: 0 00:08:33.155 Power On Hours: 0 hours 00:08:33.155 Unsafe Shutdowns: 0 00:08:33.155 Unrecoverable Media Errors: 0 00:08:33.155 Lifetime Error Log Entries: 0 00:08:33.155 Warning Temperature Time: 0 minutes 00:08:33.155 Critical Temperature Time: 0 minutes 00:08:33.155 00:08:33.155 Number of Queues 00:08:33.155 ================ 00:08:33.155 Number of I/O Submission Queues: 64 00:08:33.155 Number of I/O Completion Queues: 64 00:08:33.155 00:08:33.155 ZNS Specific Controller Data 00:08:33.155 ============================ 00:08:33.155 Zone Append Size Limit: 0 00:08:33.155 00:08:33.155 00:08:33.155 Active Namespaces 00:08:33.155 ================= 00:08:33.155 Namespace ID:1 00:08:33.155 Error Recovery Timeout: Unlimited 00:08:33.155 Command Set Identifier: NVM (00h) 00:08:33.155 Deallocate: Supported 00:08:33.155 Deallocated/Unwritten Error: Supported 00:08:33.155 Deallocated Read Value: All 0x00 00:08:33.155 Deallocate in Write Zeroes: Not Supported 00:08:33.155 Deallocated Guard Field: 0xFFFF 00:08:33.155 Flush: Supported 00:08:33.155 Reservation: Not Supported 00:08:33.155 Namespace Sharing Capabilities: Multiple Controllers 00:08:33.155 Size (in LBAs): 262144 (1GiB) 00:08:33.155 Capacity (in LBAs): 262144 (1GiB) 00:08:33.155 Utilization (in LBAs): 262144 (1GiB) 00:08:33.155 Thin Provisioning: Not Supported 00:08:33.155 Per-NS Atomic Units: No 00:08:33.155 Maximum Single Source Range Length: 128 00:08:33.155 Maximum Copy Length: 128 00:08:33.155 Maximum Source Range Count: 128 00:08:33.155 NGUID/EUI64 Never Reused: No 00:08:33.155 Namespace Write Protected: No 00:08:33.155 Endurance group ID: 1 00:08:33.155 Number of LBA Formats: 8 00:08:33.155 Current LBA Format: LBA Format #04 00:08:33.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.155 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.155 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.155 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.155 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.155 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.155 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:33.155 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.155 00:08:33.155 Get Feature FDP: 00:08:33.155 ================ 00:08:33.155 Enabled: Yes 00:08:33.155 FDP configuration index: 0 00:08:33.155 00:08:33.155 FDP configurations log page 00:08:33.155 =========================== 00:08:33.155 Number of FDP configurations: 1 00:08:33.155 Version: 0 00:08:33.155 Size: 112 00:08:33.155 FDP Configuration Descriptor: 0 00:08:33.155 Descriptor Size: 96 00:08:33.155 Reclaim Group Identifier format: 2 00:08:33.155 FDP Volatile Write Cache: Not Present 00:08:33.155 FDP Configuration: Valid 00:08:33.155 Vendor Specific Size: 0 00:08:33.155 Number of Reclaim Groups: 2 00:08:33.155 Number of Reclaim Unit Handles: 8 00:08:33.155 Max Placement Identifiers: 128 00:08:33.155 Number of Namespaces Supported: 256 00:08:33.155 Reclaim Unit Nominal Size: 6000000 bytes 00:08:33.155 Estimated Reclaim Unit Time Limit: Not Reported 00:08:33.155 RUH Desc #000: RUH Type: Initially Isolated 00:08:33.155 RUH Desc #001: RUH Type: Initially Isolated 00:08:33.155 RUH Desc #002: RUH Type: Initially Isolated 00:08:33.155 RUH Desc #003: RUH Type: Initially Isolated 00:08:33.155 RUH Desc #004: RUH Type: Initially Isolated 00:08:33.155 RUH Desc #005: RUH Type: Initially Isolated 00:08:33.155 RUH Desc #006: RUH Type: Initially Isolated 00:08:33.155 RUH Desc #007: RUH Type: Initially Isolated 00:08:33.155 00:08:33.155 FDP reclaim unit handle usage log page 00:08:33.155 ====================================== 00:08:33.155 Number of Reclaim Unit Handles: 8 00:08:33.155 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:33.155 RUH Usage Desc #001: RUH Attributes: Unused 00:08:33.155 RUH Usage Desc #002: RUH Attributes: Unused 00:08:33.155 RUH Usage Desc #003: RUH Attributes: Unused 00:08:33.155 RUH Usage Desc #004: RUH Attributes: Unused 00:08:33.155 RUH Usage Desc #005: RUH Attributes: Unused 00:08:33.155 RUH Usage Desc #006: RUH Attributes: Unused 00:08:33.155 RUH Usage Desc #007: RUH Attributes: Unused 00:08:33.155 00:08:33.155 FDP statistics log page 00:08:33.155 ======================= 00:08:33.155 Host bytes with metadata written: 413769728 00:08:33.155 Media bytes with metadata written: 413814784 00:08:33.155 Media bytes erased: 0 00:08:33.155 00:08:33.155 FDP events log page 00:08:33.155 =================== 00:08:33.155 Number of FDP events: 0 00:08:33.155 00:08:33.155 NVM Specific Namespace Data 00:08:33.155 =========================== 00:08:33.155 Logical Block Storage Tag Mask: 0 00:08:33.155 Protection Information Capabilities: 00:08:33.155 16b Guard Protection Information Storage Tag Support: No 00:08:33.155 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.155 Storage Tag Check Read Support: No 00:08:33.155 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.155 00:08:33.155 real 0m1.400s 00:08:33.155 user 0m0.553s 00:08:33.155 sys 0m0.636s 00:08:33.155 05:55:56 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.155 05:55:56 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:33.155 ************************************ 00:08:33.155 END TEST nvme_identify 00:08:33.155 ************************************ 00:08:33.155 05:55:56 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:33.155 05:55:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.155 05:55:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.155 05:55:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.155 ************************************ 00:08:33.155 START TEST nvme_perf 00:08:33.155 ************************************ 00:08:33.155 05:55:56 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:33.155 05:55:56 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:34.531 Initializing NVMe Controllers 00:08:34.531 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:34.531 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.531 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.531 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.531 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:34.531 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:34.531 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:34.531 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:34.531 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:34.531 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:34.531 Initialization complete. Launching workers. 
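(Editor's sketch, before the latency table below: the nvme_perf test drives spdk_nvme_perf against every attached controller. A minimal way to rerun that measurement by hand follows; the flag meanings are our reading of the tool's help text, the build path assumes an in-tree SPDK build with devices already bound via scripts/setup.sh, and the CI-specific -i shared-memory ID and -N flag are omitted.)

# Sketch: reproduce the read-latency run locally (assumptions noted above).
# -q 128    queue depth: up to 128 outstanding I/Os per queue pair
# -w read   100% sequential-read workload
# -o 12288  I/O size in bytes (12 KiB)
# -t 1      run time in seconds
# -LL       software latency tracking; giving -L twice also prints the
#           detailed per-range histogram seen in this log
sudo ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL

(With no explicit -r transport ID, the tool probes and attaches all locally visible NVMe controllers, consistent with the six-namespace attach list above.)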
00:08:34.531 ======================================================== 00:08:34.531 Latency(us) 00:08:34.531 Device Information : IOPS MiB/s Average min max 00:08:34.531 PCIE (0000:00:10.0) NSID 1 from core 0: 12735.59 149.25 10051.74 5685.64 27978.85 00:08:34.531 PCIE (0000:00:11.0) NSID 1 from core 0: 12735.59 149.25 10040.52 5441.17 27182.44 00:08:34.531 PCIE (0000:00:13.0) NSID 1 from core 0: 12735.59 149.25 10028.17 4655.82 27231.70 00:08:34.531 PCIE (0000:00:12.0) NSID 1 from core 0: 12735.59 149.25 10014.67 4272.24 26615.12 00:08:34.531 PCIE (0000:00:12.0) NSID 2 from core 0: 12735.59 149.25 10001.32 3856.62 26014.55 00:08:34.531 PCIE (0000:00:12.0) NSID 3 from core 0: 12735.59 149.25 9988.44 3528.46 25436.40 00:08:34.531 ======================================================== 00:08:34.531 Total : 76413.55 895.47 10020.81 3528.46 27978.85 00:08:34.531 00:08:34.531 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:34.531 ================================================================================= 00:08:34.531 1.00000% : 8043.055us 00:08:34.531 10.00000% : 8519.680us 00:08:34.531 25.00000% : 8996.305us 00:08:34.531 50.00000% : 9472.931us 00:08:34.531 75.00000% : 10307.025us 00:08:34.531 90.00000% : 12511.418us 00:08:34.531 95.00000% : 13345.513us 00:08:34.531 98.00000% : 14715.811us 00:08:34.531 99.00000% : 17635.142us 00:08:34.531 99.50000% : 26214.400us 00:08:34.531 99.90000% : 27644.276us 00:08:34.531 99.99000% : 28001.745us 00:08:34.531 99.99900% : 28001.745us 00:08:34.531 99.99990% : 28001.745us 00:08:34.531 99.99999% : 28001.745us 00:08:34.531 00:08:34.531 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:34.531 ================================================================================= 00:08:34.531 1.00000% : 8102.633us 00:08:34.531 10.00000% : 8579.258us 00:08:34.531 25.00000% : 8996.305us 00:08:34.531 50.00000% : 9472.931us 00:08:34.531 75.00000% : 10247.447us 00:08:34.531 90.00000% : 12570.996us 00:08:34.531 95.00000% : 13226.356us 00:08:34.531 98.00000% : 14239.185us 00:08:34.531 99.00000% : 17992.611us 00:08:34.531 99.50000% : 25737.775us 00:08:34.531 99.90000% : 26929.338us 00:08:34.531 99.99000% : 27167.651us 00:08:34.531 99.99900% : 27286.807us 00:08:34.531 99.99990% : 27286.807us 00:08:34.531 99.99999% : 27286.807us 00:08:34.531 00:08:34.532 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:34.532 ================================================================================= 00:08:34.532 1.00000% : 7923.898us 00:08:34.532 10.00000% : 8519.680us 00:08:34.532 25.00000% : 8996.305us 00:08:34.532 50.00000% : 9472.931us 00:08:34.532 75.00000% : 10247.447us 00:08:34.532 90.00000% : 12570.996us 00:08:34.532 95.00000% : 13285.935us 00:08:34.532 98.00000% : 13941.295us 00:08:34.532 99.00000% : 18230.924us 00:08:34.532 99.50000% : 25737.775us 00:08:34.532 99.90000% : 26929.338us 00:08:34.532 99.99000% : 27286.807us 00:08:34.532 99.99900% : 27286.807us 00:08:34.532 99.99990% : 27286.807us 00:08:34.532 99.99999% : 27286.807us 00:08:34.532 00:08:34.532 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:34.532 ================================================================================= 00:08:34.532 1.00000% : 7626.007us 00:08:34.532 10.00000% : 8519.680us 00:08:34.532 25.00000% : 8996.305us 00:08:34.532 50.00000% : 9472.931us 00:08:34.532 75.00000% : 10247.447us 00:08:34.532 90.00000% : 12511.418us 00:08:34.532 95.00000% : 13226.356us 00:08:34.532 98.00000% : 14477.498us 
00:08:34.532 99.00000% : 17515.985us 00:08:34.532 99.50000% : 25022.836us 00:08:34.532 99.90000% : 26333.556us 00:08:34.532 99.99000% : 26691.025us 00:08:34.532 99.99900% : 26691.025us 00:08:34.532 99.99990% : 26691.025us 00:08:34.532 99.99999% : 26691.025us 00:08:34.532 00:08:34.532 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:34.532 ================================================================================= 00:08:34.532 1.00000% : 7298.327us 00:08:34.532 10.00000% : 8519.680us 00:08:34.532 25.00000% : 8996.305us 00:08:34.532 50.00000% : 9472.931us 00:08:34.532 75.00000% : 10307.025us 00:08:34.532 90.00000% : 12451.840us 00:08:34.532 95.00000% : 13226.356us 00:08:34.532 98.00000% : 15192.436us 00:08:34.532 99.00000% : 17039.360us 00:08:34.532 99.50000% : 24427.055us 00:08:34.532 99.90000% : 25737.775us 00:08:34.532 99.99000% : 26095.244us 00:08:34.532 99.99900% : 26095.244us 00:08:34.532 99.99990% : 26095.244us 00:08:34.532 99.99999% : 26095.244us 00:08:34.532 00:08:34.532 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:34.532 ================================================================================= 00:08:34.532 1.00000% : 6970.647us 00:08:34.532 10.00000% : 8519.680us 00:08:34.532 25.00000% : 8996.305us 00:08:34.532 50.00000% : 9472.931us 00:08:34.532 75.00000% : 10307.025us 00:08:34.532 90.00000% : 12392.262us 00:08:34.532 95.00000% : 13226.356us 00:08:34.532 98.00000% : 15073.280us 00:08:34.532 99.00000% : 16801.047us 00:08:34.532 99.50000% : 23831.273us 00:08:34.532 99.90000% : 25141.993us 00:08:34.532 99.99000% : 25499.462us 00:08:34.532 99.99900% : 25499.462us 00:08:34.532 99.99990% : 25499.462us 00:08:34.532 99.99999% : 25499.462us 00:08:34.532 00:08:34.532 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:34.532 ============================================================================== 00:08:34.532 Range in us Cumulative IO count 00:08:34.532 5659.927 - 5689.716: 0.0079% ( 1) 00:08:34.532 5689.716 - 5719.505: 0.0236% ( 2) 00:08:34.532 5719.505 - 5749.295: 0.0471% ( 3) 00:08:34.532 5749.295 - 5779.084: 0.0550% ( 1) 00:08:34.532 5779.084 - 5808.873: 0.0707% ( 2) 00:08:34.532 5808.873 - 5838.662: 0.0864% ( 2) 00:08:34.532 5838.662 - 5868.451: 0.0942% ( 1) 00:08:34.532 5868.451 - 5898.240: 0.1178% ( 3) 00:08:34.532 5898.240 - 5928.029: 0.1335% ( 2) 00:08:34.532 5928.029 - 5957.818: 0.1413% ( 1) 00:08:34.532 5957.818 - 5987.607: 0.1649% ( 3) 00:08:34.532 5987.607 - 6017.396: 0.1727% ( 1) 00:08:34.532 6017.396 - 6047.185: 0.1884% ( 2) 00:08:34.532 6047.185 - 6076.975: 0.2041% ( 2) 00:08:34.532 6076.975 - 6106.764: 0.2198% ( 2) 00:08:34.532 6106.764 - 6136.553: 0.2356% ( 2) 00:08:34.532 6136.553 - 6166.342: 0.2434% ( 1) 00:08:34.532 6166.342 - 6196.131: 0.2670% ( 3) 00:08:34.532 6196.131 - 6225.920: 0.2748% ( 1) 00:08:34.532 6225.920 - 6255.709: 0.2984% ( 3) 00:08:34.532 6255.709 - 6285.498: 0.3062% ( 1) 00:08:34.532 6285.498 - 6315.287: 0.3219% ( 2) 00:08:34.532 6315.287 - 6345.076: 0.3455% ( 3) 00:08:34.532 6345.076 - 6374.865: 0.3612% ( 2) 00:08:34.532 6374.865 - 6404.655: 0.3690% ( 1) 00:08:34.532 6404.655 - 6434.444: 0.3847% ( 2) 00:08:34.532 6434.444 - 6464.233: 0.4004% ( 2) 00:08:34.532 6464.233 - 6494.022: 0.4161% ( 2) 00:08:34.532 6494.022 - 6523.811: 0.4397% ( 3) 00:08:34.532 6553.600 - 6583.389: 0.4554% ( 2) 00:08:34.532 6583.389 - 6613.178: 0.4711% ( 2) 00:08:34.532 6613.178 - 6642.967: 0.4868% ( 2) 00:08:34.532 6642.967 - 6672.756: 0.5025% ( 2) 00:08:34.532 7804.742 - 7864.320: 0.5182% ( 2) 
00:08:34.532 7864.320 - 7923.898: 0.6203% ( 13) 00:08:34.532 7923.898 - 7983.476: 0.8951% ( 35) 00:08:34.532 7983.476 - 8043.055: 1.3584% ( 59) 00:08:34.532 8043.055 - 8102.633: 2.1200% ( 97) 00:08:34.532 8102.633 - 8162.211: 3.0857% ( 123) 00:08:34.532 8162.211 - 8221.789: 4.1614% ( 137) 00:08:34.532 8221.789 - 8281.367: 5.3942% ( 157) 00:08:34.532 8281.367 - 8340.945: 6.6661% ( 162) 00:08:34.532 8340.945 - 8400.524: 8.1187% ( 185) 00:08:34.532 8400.524 - 8460.102: 9.6655% ( 197) 00:08:34.532 8460.102 - 8519.680: 11.3065% ( 209) 00:08:34.532 8519.680 - 8579.258: 12.9711% ( 212) 00:08:34.532 8579.258 - 8638.836: 14.5415% ( 200) 00:08:34.532 8638.836 - 8698.415: 16.3395% ( 229) 00:08:34.532 8698.415 - 8757.993: 18.0041% ( 212) 00:08:34.532 8757.993 - 8817.571: 19.9513% ( 248) 00:08:34.532 8817.571 - 8877.149: 21.9692% ( 257) 00:08:34.532 8877.149 - 8936.727: 24.4268% ( 313) 00:08:34.532 8936.727 - 8996.305: 27.0886% ( 339) 00:08:34.532 8996.305 - 9055.884: 29.9623% ( 366) 00:08:34.532 9055.884 - 9115.462: 32.8910% ( 373) 00:08:34.532 9115.462 - 9175.040: 35.8511% ( 377) 00:08:34.532 9175.040 - 9234.618: 38.9133% ( 390) 00:08:34.532 9234.618 - 9294.196: 41.9677% ( 389) 00:08:34.532 9294.196 - 9353.775: 45.0298% ( 390) 00:08:34.532 9353.775 - 9413.353: 47.6759% ( 337) 00:08:34.532 9413.353 - 9472.931: 50.1727% ( 318) 00:08:34.532 9472.931 - 9532.509: 52.7246% ( 325) 00:08:34.532 9532.509 - 9592.087: 55.1351% ( 307) 00:08:34.532 9592.087 - 9651.665: 57.3257% ( 279) 00:08:34.532 9651.665 - 9711.244: 59.4771% ( 274) 00:08:34.532 9711.244 - 9770.822: 61.5813% ( 268) 00:08:34.532 9770.822 - 9830.400: 63.5129% ( 246) 00:08:34.532 9830.400 - 9889.978: 65.3973% ( 240) 00:08:34.532 9889.978 - 9949.556: 67.2896% ( 241) 00:08:34.532 9949.556 - 10009.135: 68.9620% ( 213) 00:08:34.532 10009.135 - 10068.713: 70.5009% ( 196) 00:08:34.532 10068.713 - 10128.291: 71.9300% ( 182) 00:08:34.532 10128.291 - 10187.869: 73.1705% ( 158) 00:08:34.532 10187.869 - 10247.447: 74.1599% ( 126) 00:08:34.532 10247.447 - 10307.025: 75.0000% ( 107) 00:08:34.532 10307.025 - 10366.604: 75.6124% ( 78) 00:08:34.532 10366.604 - 10426.182: 76.0992% ( 62) 00:08:34.532 10426.182 - 10485.760: 76.4918% ( 50) 00:08:34.532 10485.760 - 10545.338: 76.8452% ( 45) 00:08:34.532 10545.338 - 10604.916: 77.1278% ( 36) 00:08:34.532 10604.916 - 10664.495: 77.3791% ( 32) 00:08:34.532 10664.495 - 10724.073: 77.6382% ( 33) 00:08:34.532 10724.073 - 10783.651: 77.8737% ( 30) 00:08:34.532 10783.651 - 10843.229: 78.1014% ( 29) 00:08:34.532 10843.229 - 10902.807: 78.2977% ( 25) 00:08:34.532 10902.807 - 10962.385: 78.5019% ( 26) 00:08:34.532 10962.385 - 11021.964: 78.7531% ( 32) 00:08:34.532 11021.964 - 11081.542: 79.0358% ( 36) 00:08:34.532 11081.542 - 11141.120: 79.3263% ( 37) 00:08:34.532 11141.120 - 11200.698: 79.6247% ( 38) 00:08:34.532 11200.698 - 11260.276: 79.9702% ( 44) 00:08:34.532 11260.276 - 11319.855: 80.3078% ( 43) 00:08:34.532 11319.855 - 11379.433: 80.5905% ( 36) 00:08:34.532 11379.433 - 11439.011: 81.0066% ( 53) 00:08:34.532 11439.011 - 11498.589: 81.4698% ( 59) 00:08:34.532 11498.589 - 11558.167: 81.9253% ( 58) 00:08:34.532 11558.167 - 11617.745: 82.3649% ( 56) 00:08:34.532 11617.745 - 11677.324: 82.7968% ( 55) 00:08:34.532 11677.324 - 11736.902: 83.2522% ( 58) 00:08:34.532 11736.902 - 11796.480: 83.7233% ( 60) 00:08:34.532 11796.480 - 11856.058: 84.1944% ( 60) 00:08:34.532 11856.058 - 11915.636: 84.7362% ( 69) 00:08:34.532 11915.636 - 11975.215: 85.2780% ( 69) 00:08:34.532 11975.215 - 12034.793: 85.9218% ( 82) 00:08:34.532 12034.793 - 
12094.371: 86.5264% ( 77) 00:08:34.532 12094.371 - 12153.949: 87.1231% ( 76) 00:08:34.532 12153.949 - 12213.527: 87.6806% ( 71) 00:08:34.532 12213.527 - 12273.105: 88.3166% ( 81) 00:08:34.532 12273.105 - 12332.684: 88.8976% ( 74) 00:08:34.532 12332.684 - 12392.262: 89.4237% ( 67) 00:08:34.532 12392.262 - 12451.840: 89.9733% ( 70) 00:08:34.532 12451.840 - 12511.418: 90.4758% ( 64) 00:08:34.532 12511.418 - 12570.996: 90.9783% ( 64) 00:08:34.532 12570.996 - 12630.575: 91.3081% ( 42) 00:08:34.532 12630.575 - 12690.153: 91.7557% ( 57) 00:08:34.533 12690.153 - 12749.731: 92.1325% ( 48) 00:08:34.533 12749.731 - 12809.309: 92.4702% ( 43) 00:08:34.533 12809.309 - 12868.887: 92.7842% ( 40) 00:08:34.533 12868.887 - 12928.465: 93.0512% ( 34) 00:08:34.533 12928.465 - 12988.044: 93.3260% ( 35) 00:08:34.533 12988.044 - 13047.622: 93.6479% ( 41) 00:08:34.533 13047.622 - 13107.200: 93.9306% ( 36) 00:08:34.533 13107.200 - 13166.778: 94.2447% ( 40) 00:08:34.533 13166.778 - 13226.356: 94.5116% ( 34) 00:08:34.533 13226.356 - 13285.935: 94.7943% ( 36) 00:08:34.533 13285.935 - 13345.513: 95.1005% ( 39) 00:08:34.533 13345.513 - 13405.091: 95.3596% ( 33) 00:08:34.533 13405.091 - 13464.669: 95.6030% ( 31) 00:08:34.533 13464.669 - 13524.247: 95.8621% ( 33) 00:08:34.533 13524.247 - 13583.825: 96.1291% ( 34) 00:08:34.533 13583.825 - 13643.404: 96.3254% ( 25) 00:08:34.533 13643.404 - 13702.982: 96.5609% ( 30) 00:08:34.533 13702.982 - 13762.560: 96.7415% ( 23) 00:08:34.533 13762.560 - 13822.138: 96.9221% ( 23) 00:08:34.533 13822.138 - 13881.716: 97.0713% ( 19) 00:08:34.533 13881.716 - 13941.295: 97.1891% ( 15) 00:08:34.533 13941.295 - 14000.873: 97.3068% ( 15) 00:08:34.533 14000.873 - 14060.451: 97.3697% ( 8) 00:08:34.533 14060.451 - 14120.029: 97.4874% ( 15) 00:08:34.533 14120.029 - 14179.607: 97.5817% ( 12) 00:08:34.533 14179.607 - 14239.185: 97.6445% ( 8) 00:08:34.533 14239.185 - 14298.764: 97.7230% ( 10) 00:08:34.533 14298.764 - 14358.342: 97.7858% ( 8) 00:08:34.533 14358.342 - 14417.920: 97.8251% ( 5) 00:08:34.533 14417.920 - 14477.498: 97.8722% ( 6) 00:08:34.533 14477.498 - 14537.076: 97.9271% ( 7) 00:08:34.533 14537.076 - 14596.655: 97.9585% ( 4) 00:08:34.533 14596.655 - 14656.233: 97.9742% ( 2) 00:08:34.533 14656.233 - 14715.811: 98.0292% ( 7) 00:08:34.533 14715.811 - 14775.389: 98.0606% ( 4) 00:08:34.533 14775.389 - 14834.967: 98.0920% ( 4) 00:08:34.533 14834.967 - 14894.545: 98.1234% ( 4) 00:08:34.533 14894.545 - 14954.124: 98.1627% ( 5) 00:08:34.533 14954.124 - 15013.702: 98.2019% ( 5) 00:08:34.533 15013.702 - 15073.280: 98.2177% ( 2) 00:08:34.533 15073.280 - 15132.858: 98.2726% ( 7) 00:08:34.533 15132.858 - 15192.436: 98.3119% ( 5) 00:08:34.533 15192.436 - 15252.015: 98.3433% ( 4) 00:08:34.533 15252.015 - 15371.171: 98.3982% ( 7) 00:08:34.533 15371.171 - 15490.327: 98.4532% ( 7) 00:08:34.533 15490.327 - 15609.484: 98.4846% ( 4) 00:08:34.533 15609.484 - 15728.640: 98.4925% ( 1) 00:08:34.533 15728.640 - 15847.796: 98.5082% ( 2) 00:08:34.533 15847.796 - 15966.953: 98.5317% ( 3) 00:08:34.533 15966.953 - 16086.109: 98.5788% ( 6) 00:08:34.533 16086.109 - 16205.265: 98.6181% ( 5) 00:08:34.533 16205.265 - 16324.422: 98.6652% ( 6) 00:08:34.533 16324.422 - 16443.578: 98.7045% ( 5) 00:08:34.533 16443.578 - 16562.735: 98.7516% ( 6) 00:08:34.533 16562.735 - 16681.891: 98.7908% ( 5) 00:08:34.533 16681.891 - 16801.047: 98.8379% ( 6) 00:08:34.533 16801.047 - 16920.204: 98.8851% ( 6) 00:08:34.533 16920.204 - 17039.360: 98.9243% ( 5) 00:08:34.533 17039.360 - 17158.516: 98.9714% ( 6) 00:08:34.533 17158.516 - 17277.673: 
98.9950% ( 3) 00:08:34.533 17515.985 - 17635.142: 99.0028% ( 1) 00:08:34.533 17635.142 - 17754.298: 99.0185% ( 2) 00:08:34.533 17754.298 - 17873.455: 99.0578% ( 5) 00:08:34.533 17873.455 - 17992.611: 99.0813% ( 3) 00:08:34.533 17992.611 - 18111.767: 99.1128% ( 4) 00:08:34.533 18111.767 - 18230.924: 99.1363% ( 3) 00:08:34.533 18230.924 - 18350.080: 99.1756% ( 5) 00:08:34.533 18350.080 - 18469.236: 99.2148% ( 5) 00:08:34.533 18469.236 - 18588.393: 99.2462% ( 4) 00:08:34.533 18588.393 - 18707.549: 99.2776% ( 4) 00:08:34.533 18707.549 - 18826.705: 99.3169% ( 5) 00:08:34.533 18826.705 - 18945.862: 99.3483% ( 4) 00:08:34.533 18945.862 - 19065.018: 99.3954% ( 6) 00:08:34.533 19065.018 - 19184.175: 99.4190% ( 3) 00:08:34.533 19184.175 - 19303.331: 99.4504% ( 4) 00:08:34.533 19303.331 - 19422.487: 99.4818% ( 4) 00:08:34.533 19422.487 - 19541.644: 99.4975% ( 2) 00:08:34.533 26095.244 - 26214.400: 99.5210% ( 3) 00:08:34.533 26214.400 - 26333.556: 99.5524% ( 4) 00:08:34.533 26333.556 - 26452.713: 99.5839% ( 4) 00:08:34.533 26452.713 - 26571.869: 99.6153% ( 4) 00:08:34.533 26571.869 - 26691.025: 99.6624% ( 6) 00:08:34.533 26691.025 - 26810.182: 99.6859% ( 3) 00:08:34.533 26810.182 - 26929.338: 99.7173% ( 4) 00:08:34.533 26929.338 - 27048.495: 99.7487% ( 4) 00:08:34.533 27048.495 - 27167.651: 99.7880% ( 5) 00:08:34.533 27167.651 - 27286.807: 99.8116% ( 3) 00:08:34.533 27286.807 - 27405.964: 99.8430% ( 4) 00:08:34.533 27405.964 - 27525.120: 99.8901% ( 6) 00:08:34.533 27525.120 - 27644.276: 99.9136% ( 3) 00:08:34.533 27644.276 - 27763.433: 99.9450% ( 4) 00:08:34.533 27763.433 - 27882.589: 99.9843% ( 5) 00:08:34.533 27882.589 - 28001.745: 100.0000% ( 2) 00:08:34.533 00:08:34.533 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:34.533 ============================================================================== 00:08:34.533 Range in us Cumulative IO count 00:08:34.533 5421.615 - 5451.404: 0.0079% ( 1) 00:08:34.533 5451.404 - 5481.193: 0.0314% ( 3) 00:08:34.533 5481.193 - 5510.982: 0.0471% ( 2) 00:08:34.533 5510.982 - 5540.771: 0.0628% ( 2) 00:08:34.533 5540.771 - 5570.560: 0.0864% ( 3) 00:08:34.533 5570.560 - 5600.349: 0.1021% ( 2) 00:08:34.533 5600.349 - 5630.138: 0.1178% ( 2) 00:08:34.533 5630.138 - 5659.927: 0.1413% ( 3) 00:08:34.533 5659.927 - 5689.716: 0.1570% ( 2) 00:08:34.533 5689.716 - 5719.505: 0.1727% ( 2) 00:08:34.533 5719.505 - 5749.295: 0.1963% ( 3) 00:08:34.533 5749.295 - 5779.084: 0.2120% ( 2) 00:08:34.533 5779.084 - 5808.873: 0.2277% ( 2) 00:08:34.533 5808.873 - 5838.662: 0.2434% ( 2) 00:08:34.533 5838.662 - 5868.451: 0.2670% ( 3) 00:08:34.533 5868.451 - 5898.240: 0.2827% ( 2) 00:08:34.533 5898.240 - 5928.029: 0.2984% ( 2) 00:08:34.533 5928.029 - 5957.818: 0.3141% ( 2) 00:08:34.533 5957.818 - 5987.607: 0.3298% ( 2) 00:08:34.533 5987.607 - 6017.396: 0.3533% ( 3) 00:08:34.533 6017.396 - 6047.185: 0.3690% ( 2) 00:08:34.533 6047.185 - 6076.975: 0.3847% ( 2) 00:08:34.533 6076.975 - 6106.764: 0.4083% ( 3) 00:08:34.533 6106.764 - 6136.553: 0.4240% ( 2) 00:08:34.533 6136.553 - 6166.342: 0.4397% ( 2) 00:08:34.533 6166.342 - 6196.131: 0.4554% ( 2) 00:08:34.533 6196.131 - 6225.920: 0.4711% ( 2) 00:08:34.533 6225.920 - 6255.709: 0.4947% ( 3) 00:08:34.533 6255.709 - 6285.498: 0.5025% ( 1) 00:08:34.533 7804.742 - 7864.320: 0.5339% ( 4) 00:08:34.533 7864.320 - 7923.898: 0.5653% ( 4) 00:08:34.533 7923.898 - 7983.476: 0.6438% ( 10) 00:08:34.533 7983.476 - 8043.055: 0.8401% ( 25) 00:08:34.533 8043.055 - 8102.633: 1.2170% ( 48) 00:08:34.533 8102.633 - 8162.211: 1.9315% ( 91) 00:08:34.533 
8162.211 - 8221.789: 2.8188% ( 113) 00:08:34.533 8221.789 - 8281.367: 3.9651% ( 146) 00:08:34.533 8281.367 - 8340.945: 5.2528% ( 164) 00:08:34.533 8340.945 - 8400.524: 6.6269% ( 175) 00:08:34.533 8400.524 - 8460.102: 8.2129% ( 202) 00:08:34.533 8460.102 - 8519.680: 9.9482% ( 221) 00:08:34.533 8519.680 - 8579.258: 11.8012% ( 236) 00:08:34.533 8579.258 - 8638.836: 13.8269% ( 258) 00:08:34.533 8638.836 - 8698.415: 15.7506% ( 245) 00:08:34.533 8698.415 - 8757.993: 17.6743% ( 245) 00:08:34.533 8757.993 - 8817.571: 19.6687% ( 254) 00:08:34.533 8817.571 - 8877.149: 21.7729% ( 268) 00:08:34.533 8877.149 - 8936.727: 23.8772% ( 268) 00:08:34.533 8936.727 - 8996.305: 26.3976% ( 321) 00:08:34.533 8996.305 - 9055.884: 29.3342% ( 374) 00:08:34.533 9055.884 - 9115.462: 32.4356% ( 395) 00:08:34.533 9115.462 - 9175.040: 35.6391% ( 408) 00:08:34.533 9175.040 - 9234.618: 38.9683% ( 424) 00:08:34.533 9234.618 - 9294.196: 42.1325% ( 403) 00:08:34.533 9294.196 - 9353.775: 45.1398% ( 383) 00:08:34.533 9353.775 - 9413.353: 48.0606% ( 372) 00:08:34.533 9413.353 - 9472.931: 50.8637% ( 357) 00:08:34.533 9472.931 - 9532.509: 53.4312% ( 327) 00:08:34.533 9532.509 - 9592.087: 55.9752% ( 324) 00:08:34.533 9592.087 - 9651.665: 58.3464% ( 302) 00:08:34.533 9651.665 - 9711.244: 60.6784% ( 297) 00:08:34.533 9711.244 - 9770.822: 62.9240% ( 286) 00:08:34.533 9770.822 - 9830.400: 65.0911% ( 276) 00:08:34.533 9830.400 - 9889.978: 67.2660% ( 277) 00:08:34.533 9889.978 - 9949.556: 69.1740% ( 243) 00:08:34.533 9949.556 - 10009.135: 70.9092% ( 221) 00:08:34.533 10009.135 - 10068.713: 72.4639% ( 198) 00:08:34.533 10068.713 - 10128.291: 73.7123% ( 159) 00:08:34.533 10128.291 - 10187.869: 74.5603% ( 108) 00:08:34.533 10187.869 - 10247.447: 75.2827% ( 92) 00:08:34.533 10247.447 - 10307.025: 75.8558% ( 73) 00:08:34.533 10307.025 - 10366.604: 76.3741% ( 66) 00:08:34.533 10366.604 - 10426.182: 76.7509% ( 48) 00:08:34.533 10426.182 - 10485.760: 77.0572% ( 39) 00:08:34.533 10485.760 - 10545.338: 77.3555% ( 38) 00:08:34.533 10545.338 - 10604.916: 77.5911% ( 30) 00:08:34.533 10604.916 - 10664.495: 77.8266% ( 30) 00:08:34.533 10664.495 - 10724.073: 78.0465% ( 28) 00:08:34.533 10724.073 - 10783.651: 78.2506% ( 26) 00:08:34.533 10783.651 - 10843.229: 78.4705% ( 28) 00:08:34.533 10843.229 - 10902.807: 78.6589% ( 24) 00:08:34.533 10902.807 - 10962.385: 78.8866% ( 29) 00:08:34.533 10962.385 - 11021.964: 79.0829% ( 25) 00:08:34.533 11021.964 - 11081.542: 79.3106% ( 29) 00:08:34.533 11081.542 - 11141.120: 79.5226% ( 27) 00:08:34.533 11141.120 - 11200.698: 79.7268% ( 26) 00:08:34.533 11200.698 - 11260.276: 79.9937% ( 34) 00:08:34.533 11260.276 - 11319.855: 80.2214% ( 29) 00:08:34.533 11319.855 - 11379.433: 80.5198% ( 38) 00:08:34.533 11379.433 - 11439.011: 80.8417% ( 41) 00:08:34.533 11439.011 - 11498.589: 81.2343% ( 50) 00:08:34.533 11498.589 - 11558.167: 81.6190% ( 49) 00:08:34.533 11558.167 - 11617.745: 82.0744% ( 58) 00:08:34.533 11617.745 - 11677.324: 82.4435% ( 47) 00:08:34.533 11677.324 - 11736.902: 82.8910% ( 57) 00:08:34.533 11736.902 - 11796.480: 83.2836% ( 50) 00:08:34.533 11796.480 - 11856.058: 83.7704% ( 62) 00:08:34.533 11856.058 - 11915.636: 84.2258% ( 58) 00:08:34.533 11915.636 - 11975.215: 84.6734% ( 57) 00:08:34.533 11975.215 - 12034.793: 85.2151% ( 69) 00:08:34.533 12034.793 - 12094.371: 85.8197% ( 77) 00:08:34.533 12094.371 - 12153.949: 86.3772% ( 71) 00:08:34.533 12153.949 - 12213.527: 86.9661% ( 75) 00:08:34.533 12213.527 - 12273.105: 87.4686% ( 64) 00:08:34.533 12273.105 - 12332.684: 88.0418% ( 73) 00:08:34.533 12332.684 - 
00:08:34.533 [remaining per-bucket latency counts elided]
00:08:34.533 
00:08:34.533 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:34.533 ==============================================================================
00:08:34.533 Range in us Cumulative IO count
00:08:34.534 [per-bucket latency counts elided]
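Note on reading these histograms: each elided entry has the shape "lo - hi: pct% ( n )", where n is the number of I/Os whose latency landed between lo and hi microseconds and pct is cumulative over all buckets so far. Any percentile can therefore be recovered by scanning for the first bucket whose cumulative share reaches the target, which is how the "Summary latency data" percentiles further down relate to these dumps. A minimal sketch, assuming entries are already parsed into (lo, hi, cumulative_pct) tuples; the numbers below are illustrative, not taken from this run:

    # Estimate a latency percentile from cumulative histogram buckets.
    def percentile_us(entries, target_pct):
        for lo, hi, cum in entries:
            if cum >= target_pct:
                return hi  # upper edge of the first bucket crossing the target
        return entries[-1][1]

    buckets = [(9950.0, 10009.1, 49.2), (10009.1, 10068.7, 52.8)]
    print(percentile_us(buckets, 50.0))  # -> 10068.7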
00:08:34.534 
00:08:34.534 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:34.534 ==============================================================================
00:08:34.534 Range in us Cumulative IO count
00:08:34.534 [per-bucket latency counts elided]
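Every entry in these dumps follows one textual pattern, so pulling them into structured form is a one-regex job. A hedged sketch; the regex matches the format exactly as printed here and is nothing SPDK-specific:

    import re

    # Matches entries like "12392.262 - 12451.840: 88.6385% ( 76)"
    ENTRY = re.compile(r"(\d+\.\d+) - (\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)")

    def parse(text):
        return [(float(lo), float(hi), float(pct), int(n))
                for lo, hi, pct, n in ENTRY.findall(text)]

    print(parse("12392.262 - 12451.840: 88.6385% ( 76)"))
    # -> [(12392.262, 12451.84, 88.6385, 76)]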
00:08:34.535 
00:08:34.535 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:34.535 ==============================================================================
00:08:34.535 Range in us Cumulative IO count
00:08:34.535 [per-bucket latency counts elided]
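The bucket edges in these dumps are not arbitrary: the spacing is about 14.9 us below roughly 3813 us, 29.8 us up to roughly 7626 us, 59.6 us up to roughly 15252 us, and keeps doubling at each power-of-two boundary, i.e. 128 equal-width buckets per range. That is consistent with SPDK's log-linear latency histogram as I understand it; the generator below only reproduces the shape and is not SPDK code:

    # Log-linear bucket edges: a fixed number of equal-width buckets per
    # power-of-two range, with the width doubling from range to range.
    def edges(range_start_us, n_ranges, buckets_per_range=128):
        out, lo = [], range_start_us
        for r in range(n_ranges):
            width = range_start_us / buckets_per_range * 2 ** r
            for _ in range(buckets_per_range):
                out.append(lo)
                lo += width
        return out

    e = edges(3813.004, 2)
    print(round(e[1] - e[0], 3))      # 29.789, the spacing seen around 4-7 ms
    print(round(e[129] - e[128], 3))  # 59.578, width doubles past ~7626 us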
00:08:34.535 
00:08:34.535 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:34.535 ==============================================================================
00:08:34.535 Range in us Cumulative IO count
00:08:34.536 [per-bucket latency counts elided]
00:08:34.536 
00:08:34.536 05:55:57 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:35.916 Initializing NVMe Controllers
00:08:35.916 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:35.916 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:35.916 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:35.916 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:35.916 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:35.916 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:35.916 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:35.916 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:35.916 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:35.916 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:35.916 Initialization complete. Launching workers.
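The invocation above drives the next block of results. As I read spdk_nvme_perf's options: -q 128 keeps 128 I/Os outstanding per namespace, -w write issues sequential writes, -o 12288 sets a 12 KiB transfer size, -t 1 runs for one second, -i 0 selects the shared-memory group, and -L given twice enables software latency tracking plus the detailed per-bucket histograms seen in this log. The 1b36:0010 vendor:device IDs suggest QEMU-emulated NVMe controllers, which fits a vagrant CI host. A quick sketch of the offered load those flags imply; the namespace count of 6 comes from the Associating lines above:

    # Offered load implied by the command line above.
    qd, namespaces, io_bytes = 128, 6, 12288
    print(qd * namespaces)    # 768 writes kept in flight across all targets
    print(io_bytes // 1024)   # 12 KiB per write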
00:08:35.916 ========================================================
00:08:35.916 Latency(us)
00:08:35.916 Device Information : IOPS MiB/s Average min max
00:08:35.916 PCIE (0000:00:10.0) NSID 1 from core 0: 12463.71 146.06 10273.43 6295.78 35210.58
00:08:35.916 PCIE (0000:00:11.0) NSID 1 from core 0: 12463.71 146.06 10262.01 6048.23 34117.65
00:08:35.916 PCIE (0000:00:13.0) NSID 1 from core 0: 12463.71 146.06 10248.39 5321.78 34271.09
00:08:35.916 PCIE (0000:00:12.0) NSID 1 from core 0: 12463.71 146.06 10234.77 4954.41 33276.47
00:08:35.916 PCIE (0000:00:12.0) NSID 2 from core 0: 12463.71 146.06 10220.89 4522.04 32514.70
00:08:35.916 PCIE (0000:00:12.0) NSID 3 from core 0: 12463.71 146.06 10207.16 4237.05 31681.50
00:08:35.916 ========================================================
00:08:35.916 Total : 74782.26 876.35 10241.11 4237.05 35210.58
00:08:35.916 
00:08:35.916 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:35.916 =================================================================================
00:08:35.916 1.00000% : 8340.945us
00:08:35.916 10.00000% : 8877.149us
00:08:35.916 25.00000% : 9472.931us
00:08:35.916 50.00000% : 10009.135us
00:08:35.916 75.00000% : 10604.916us
00:08:35.916 90.00000% : 11617.745us
00:08:35.916 95.00000% : 12213.527us
00:08:35.916 98.00000% : 13285.935us
00:08:35.916 99.00000% : 23592.960us
00:08:35.916 99.50000% : 33602.095us
00:08:35.916 99.90000% : 35031.971us
00:08:35.916 99.99000% : 35270.284us
00:08:35.916 99.99900% : 35270.284us
00:08:35.916 99.99990% : 35270.284us
00:08:35.916 99.99999% : 35270.284us
00:08:35.916 
00:08:35.916 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:35.916 =================================================================================
00:08:35.916 1.00000% : 8460.102us
00:08:35.916 10.00000% : 8936.727us
00:08:35.916 25.00000% : 9472.931us
00:08:35.916 50.00000% : 10009.135us
00:08:35.916 75.00000% : 10545.338us
00:08:35.916 90.00000% : 11677.324us
00:08:35.916 95.00000% : 12094.371us
00:08:35.916 98.00000% : 13166.778us
00:08:35.916 99.00000% : 24188.742us
00:08:35.916 99.50000% : 32887.156us
00:08:35.916 99.90000% : 34078.720us
00:08:35.916 99.99000% : 34317.033us
00:08:35.916 99.99900% : 34317.033us
00:08:35.916 99.99990% : 34317.033us
00:08:35.916 99.99999% : 34317.033us
00:08:35.916 
00:08:35.916 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:35.916 =================================================================================
00:08:35.916 1.00000% : 8340.945us
00:08:35.916 10.00000% : 8936.727us
00:08:35.916 25.00000% : 9413.353us
00:08:35.916 50.00000% : 10009.135us
00:08:35.916 75.00000% : 10545.338us
00:08:35.916 90.00000% : 11617.745us
00:08:35.916 95.00000% : 12153.949us
00:08:35.916 98.00000% : 12928.465us
00:08:35.916 99.00000% : 24307.898us
00:08:35.916 99.50000% : 32887.156us
00:08:35.916 99.90000% : 34078.720us
00:08:35.916 99.99000% : 34317.033us
00:08:35.916 99.99900% : 34317.033us
00:08:35.916 99.99990% : 34317.033us
00:08:35.916 99.99999% : 34317.033us
00:08:35.916 
00:08:35.916 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:35.916 =================================================================================
00:08:35.916 1.00000% : 8221.789us
00:08:35.916 10.00000% : 8936.727us
00:08:35.916 25.00000% : 9413.353us
00:08:35.916 50.00000% : 10009.135us
00:08:35.916 75.00000% : 10545.338us
00:08:35.916 90.00000% : 11617.745us
00:08:35.916 95.00000% : 12094.371us
00:08:35.916 98.00000% : 12690.153us
00:08:35.916 99.00000% : 23950.429us
00:08:35.916 99.50000% : 31933.905us
00:08:35.916 99.90000% : 33125.469us
00:08:35.916 99.99000% : 33363.782us
00:08:35.916 99.99900% : 33363.782us
00:08:35.916 99.99990% : 33363.782us
00:08:35.916 99.99999% : 33363.782us
00:08:35.916 
00:08:35.916 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:35.916 =================================================================================
00:08:35.916 1.00000% : 7864.320us
00:08:35.916 10.00000% : 8936.727us
00:08:35.916 25.00000% : 9413.353us
00:08:35.916 50.00000% : 10009.135us
00:08:35.916 75.00000% : 10545.338us
00:08:35.916 90.00000% : 11617.745us
00:08:35.916 95.00000% : 12094.371us
00:08:35.916 98.00000% : 12630.575us
00:08:35.916 99.00000% : 23592.960us
00:08:35.916 99.50000% : 31218.967us
00:08:35.916 99.90000% : 32410.531us
00:08:35.916 99.99000% : 32648.844us
00:08:35.916 99.99900% : 32648.844us
00:08:35.916 99.99990% : 32648.844us
00:08:35.916 99.99999% : 32648.844us
00:08:35.916 
00:08:35.917 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:35.917 =================================================================================
00:08:35.917 1.00000% : 7506.851us
00:08:35.917 10.00000% : 8936.727us
00:08:35.917 25.00000% : 9413.353us
00:08:35.917 50.00000% : 10009.135us
00:08:35.917 75.00000% : 10545.338us
00:08:35.917 90.00000% : 11617.745us
00:08:35.917 95.00000% : 12034.793us
00:08:35.917 98.00000% : 12570.996us
00:08:35.917 99.00000% : 23473.804us
00:08:35.917 99.50000% : 30384.873us
00:08:35.917 99.90000% : 31457.280us
00:08:35.917 99.99000% : 31695.593us
00:08:35.917 99.99900% : 31695.593us
00:08:35.917 99.99990% : 31695.593us
00:08:35.917 99.99999% : 31695.593us
00:08:35.917 
00:08:35.917 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:35.917 ==============================================================================
00:08:35.917 Range in us Cumulative IO count
00:08:35.917 [per-bucket latency counts elided]
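Two arithmetic cross-checks on the summary block above, using the 0000:00:10.0 row and assuming the run lasted the full -t 1 second. Little's law: IOPS times average latency should recover the queue depth, and 12463.71 x 10273.43 us is about 128, matching -q 128. Throughput: 12463.71 IOPS x 12288 bytes is 146.06 MiB/s, exactly the MiB/s column. One caveat when reading the tail rows: a 1-second run yields only ~12.5k samples per namespace, so every percentile above 1 - 1/12464 (about 99.992%) is just the single slowest I/O, which is why the 99.99000% through 99.99999% rows repeat one value; that value is a bucket upper edge (35270.284us), slightly above the exact max column (35210.58us).

    # Sanity checks against the table above (0000:00:10.0 row).
    iops, avg_us, io_bytes, qd = 12463.71, 10273.43, 12288, 128
    print(round(iops * avg_us / 1e6, 1))      # ~128.0 -> Little's law recovers -q
    print(round(iops * io_bytes / 2**20, 2))  # 146.06 -> the MiB/s column
    print(round(1 - 1 / iops, 5))             # ~0.99992 -> tail rows beyond this are the max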
00:08:35.917 
00:08:35.917 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:35.917 ==============================================================================
00:08:35.917 Range in us Cumulative IO count
00:08:35.918 [per-bucket latency counts elided]
00:08:35.918 
00:08:35.918 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:35.918 ==============================================================================
00:08:35.918 Range in us Cumulative IO count
00:08:35.918 [per-bucket latency counts elided]
2.1154% ( 54) 00:08:35.918 8519.680 - 8579.258: 2.6362% ( 65) 00:08:35.918 8579.258 - 8638.836: 3.2772% ( 80) 00:08:35.918 8638.836 - 8698.415: 4.3029% ( 128) 00:08:35.918 8698.415 - 8757.993: 5.7933% ( 186) 00:08:35.918 8757.993 - 8817.571: 7.1875% ( 174) 00:08:35.918 8817.571 - 8877.149: 8.8462% ( 207) 00:08:35.918 8877.149 - 8936.727: 10.6410% ( 224) 00:08:35.918 8936.727 - 8996.305: 12.6763% ( 254) 00:08:35.918 8996.305 - 9055.884: 14.8718% ( 274) 00:08:35.918 9055.884 - 9115.462: 16.8109% ( 242) 00:08:35.918 9115.462 - 9175.040: 18.4696% ( 207) 00:08:35.918 9175.040 - 9234.618: 20.1362% ( 208) 00:08:35.918 9234.618 - 9294.196: 21.7548% ( 202) 00:08:35.918 9294.196 - 9353.775: 23.4615% ( 213) 00:08:35.918 9353.775 - 9413.353: 25.2003% ( 217) 00:08:35.918 9413.353 - 9472.931: 27.1394% ( 242) 00:08:35.918 9472.931 - 9532.509: 29.2869% ( 268) 00:08:35.918 9532.509 - 9592.087: 31.8590% ( 321) 00:08:35.918 9592.087 - 9651.665: 34.6314% ( 346) 00:08:35.918 9651.665 - 9711.244: 37.3317% ( 337) 00:08:35.918 9711.244 - 9770.822: 40.2003% ( 358) 00:08:35.918 9770.822 - 9830.400: 43.4215% ( 402) 00:08:35.918 9830.400 - 9889.978: 46.4263% ( 375) 00:08:35.918 9889.978 - 9949.556: 49.5353% ( 388) 00:08:35.918 9949.556 - 10009.135: 52.4359% ( 362) 00:08:35.918 10009.135 - 10068.713: 55.2404% ( 350) 00:08:35.918 10068.713 - 10128.291: 58.1571% ( 364) 00:08:35.918 10128.291 - 10187.869: 61.1138% ( 369) 00:08:35.918 10187.869 - 10247.447: 64.0064% ( 361) 00:08:35.918 10247.447 - 10307.025: 66.7228% ( 339) 00:08:35.918 10307.025 - 10366.604: 69.2388% ( 314) 00:08:35.918 10366.604 - 10426.182: 71.5465% ( 288) 00:08:35.918 10426.182 - 10485.760: 73.6779% ( 266) 00:08:35.918 10485.760 - 10545.338: 75.6410% ( 245) 00:08:35.918 10545.338 - 10604.916: 77.4038% ( 220) 00:08:35.918 10604.916 - 10664.495: 78.8061% ( 175) 00:08:35.919 10664.495 - 10724.073: 80.0401% ( 154) 00:08:35.919 10724.073 - 10783.651: 81.0978% ( 132) 00:08:35.919 10783.651 - 10843.229: 81.8429% ( 93) 00:08:35.919 10843.229 - 10902.807: 82.5321% ( 86) 00:08:35.919 10902.807 - 10962.385: 83.1731% ( 80) 00:08:35.919 10962.385 - 11021.964: 83.8782% ( 88) 00:08:35.919 11021.964 - 11081.542: 84.5112% ( 79) 00:08:35.919 11081.542 - 11141.120: 85.0240% ( 64) 00:08:35.919 11141.120 - 11200.698: 85.5849% ( 70) 00:08:35.919 11200.698 - 11260.276: 86.1298% ( 68) 00:08:35.919 11260.276 - 11319.855: 86.6667% ( 67) 00:08:35.919 11319.855 - 11379.433: 87.2596% ( 74) 00:08:35.919 11379.433 - 11439.011: 87.9487% ( 86) 00:08:35.919 11439.011 - 11498.589: 88.6779% ( 91) 00:08:35.919 11498.589 - 11558.167: 89.4151% ( 92) 00:08:35.919 11558.167 - 11617.745: 90.1923% ( 97) 00:08:35.919 11617.745 - 11677.324: 90.9455% ( 94) 00:08:35.919 11677.324 - 11736.902: 91.7067% ( 95) 00:08:35.919 11736.902 - 11796.480: 92.3798% ( 84) 00:08:35.919 11796.480 - 11856.058: 92.9728% ( 74) 00:08:35.919 11856.058 - 11915.636: 93.5096% ( 67) 00:08:35.919 11915.636 - 11975.215: 93.9744% ( 58) 00:08:35.919 11975.215 - 12034.793: 94.4792% ( 63) 00:08:35.919 12034.793 - 12094.371: 94.9519% ( 59) 00:08:35.919 12094.371 - 12153.949: 95.4247% ( 59) 00:08:35.919 12153.949 - 12213.527: 95.8494% ( 53) 00:08:35.919 12213.527 - 12273.105: 96.2099% ( 45) 00:08:35.919 12273.105 - 12332.684: 96.5625% ( 44) 00:08:35.919 12332.684 - 12392.262: 96.8670% ( 38) 00:08:35.919 12392.262 - 12451.840: 97.1715% ( 38) 00:08:35.919 12451.840 - 12511.418: 97.3718% ( 25) 00:08:35.919 12511.418 - 12570.996: 97.5240% ( 19) 00:08:35.919 12570.996 - 12630.575: 97.6362% ( 14) 00:08:35.919 12630.575 - 12690.153: 
97.7083% ( 9) 00:08:35.919 12690.153 - 12749.731: 97.7484% ( 5) 00:08:35.919 12749.731 - 12809.309: 97.8125% ( 8) 00:08:35.919 12809.309 - 12868.887: 97.9247% ( 14) 00:08:35.919 12868.887 - 12928.465: 98.0529% ( 16) 00:08:35.919 12928.465 - 12988.044: 98.1571% ( 13) 00:08:35.919 12988.044 - 13047.622: 98.1891% ( 4) 00:08:35.919 13047.622 - 13107.200: 98.2292% ( 5) 00:08:35.919 13107.200 - 13166.778: 98.2452% ( 2) 00:08:35.919 13166.778 - 13226.356: 98.2853% ( 5) 00:08:35.919 13226.356 - 13285.935: 98.3173% ( 4) 00:08:35.919 13285.935 - 13345.513: 98.3333% ( 2) 00:08:35.919 13345.513 - 13405.091: 98.3894% ( 7) 00:08:35.919 13405.091 - 13464.669: 98.4535% ( 8) 00:08:35.919 13464.669 - 13524.247: 98.5016% ( 6) 00:08:35.919 13524.247 - 13583.825: 98.5577% ( 7) 00:08:35.919 13583.825 - 13643.404: 98.5978% ( 5) 00:08:35.919 13643.404 - 13702.982: 98.6619% ( 8) 00:08:35.919 13702.982 - 13762.560: 98.7019% ( 5) 00:08:35.919 13762.560 - 13822.138: 98.7340% ( 4) 00:08:35.919 13822.138 - 13881.716: 98.7580% ( 3) 00:08:35.919 13881.716 - 13941.295: 98.7821% ( 3) 00:08:35.919 13941.295 - 14000.873: 98.7981% ( 2) 00:08:35.919 14000.873 - 14060.451: 98.8141% ( 2) 00:08:35.919 14060.451 - 14120.029: 98.8301% ( 2) 00:08:35.919 14120.029 - 14179.607: 98.8542% ( 3) 00:08:35.919 14179.607 - 14239.185: 98.8702% ( 2) 00:08:35.919 14239.185 - 14298.764: 98.8862% ( 2) 00:08:35.919 14298.764 - 14358.342: 98.9103% ( 3) 00:08:35.919 14358.342 - 14417.920: 98.9263% ( 2) 00:08:35.919 14417.920 - 14477.498: 98.9423% ( 2) 00:08:35.919 14477.498 - 14537.076: 98.9663% ( 3) 00:08:35.919 14537.076 - 14596.655: 98.9744% ( 1) 00:08:35.919 24069.585 - 24188.742: 98.9824% ( 1) 00:08:35.919 24188.742 - 24307.898: 99.0064% ( 3) 00:08:35.919 24307.898 - 24427.055: 99.0385% ( 4) 00:08:35.919 24427.055 - 24546.211: 99.0705% ( 4) 00:08:35.919 24546.211 - 24665.367: 99.1026% ( 4) 00:08:35.919 24665.367 - 24784.524: 99.1266% ( 3) 00:08:35.919 24784.524 - 24903.680: 99.1506% ( 3) 00:08:35.919 24903.680 - 25022.836: 99.1747% ( 3) 00:08:35.919 25022.836 - 25141.993: 99.2067% ( 4) 00:08:35.919 25141.993 - 25261.149: 99.2308% ( 3) 00:08:35.919 25261.149 - 25380.305: 99.2628% ( 4) 00:08:35.919 25380.305 - 25499.462: 99.2949% ( 4) 00:08:35.919 25499.462 - 25618.618: 99.3349% ( 5) 00:08:35.919 25618.618 - 25737.775: 99.3670% ( 4) 00:08:35.919 25737.775 - 25856.931: 99.3990% ( 4) 00:08:35.919 25856.931 - 25976.087: 99.4391% ( 5) 00:08:35.919 25976.087 - 26095.244: 99.4712% ( 4) 00:08:35.919 26095.244 - 26214.400: 99.4872% ( 2) 00:08:35.919 32648.844 - 32887.156: 99.5272% ( 5) 00:08:35.919 32887.156 - 33125.469: 99.6154% ( 11) 00:08:35.919 33125.469 - 33363.782: 99.6955% ( 10) 00:08:35.919 33363.782 - 33602.095: 99.7756% ( 10) 00:08:35.919 33602.095 - 33840.407: 99.8478% ( 9) 00:08:35.919 33840.407 - 34078.720: 99.9279% ( 10) 00:08:35.919 34078.720 - 34317.033: 100.0000% ( 9) 00:08:35.919 00:08:35.919 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:35.919 ============================================================================== 00:08:35.919 Range in us Cumulative IO count 00:08:35.919 4944.989 - 4974.778: 0.1442% ( 18) 00:08:35.919 4974.778 - 5004.567: 0.1683% ( 3) 00:08:35.919 5034.356 - 5064.145: 0.1843% ( 2) 00:08:35.919 5064.145 - 5093.935: 0.2003% ( 2) 00:08:35.919 5093.935 - 5123.724: 0.2163% ( 2) 00:08:35.919 5123.724 - 5153.513: 0.2244% ( 1) 00:08:35.919 5153.513 - 5183.302: 0.2404% ( 2) 00:08:35.919 5183.302 - 5213.091: 0.2564% ( 2) 00:08:35.919 5213.091 - 5242.880: 0.2724% ( 2) 00:08:35.919 5242.880 - 5272.669: 
0.2885% ( 2) 00:08:35.919 5272.669 - 5302.458: 0.2965% ( 1) 00:08:35.919 5302.458 - 5332.247: 0.3125% ( 2) 00:08:35.919 5332.247 - 5362.036: 0.3285% ( 2) 00:08:35.919 5362.036 - 5391.825: 0.3446% ( 2) 00:08:35.919 5391.825 - 5421.615: 0.3606% ( 2) 00:08:35.919 5421.615 - 5451.404: 0.3846% ( 3) 00:08:35.919 5451.404 - 5481.193: 0.4006% ( 2) 00:08:35.919 5481.193 - 5510.982: 0.4167% ( 2) 00:08:35.919 5510.982 - 5540.771: 0.4327% ( 2) 00:08:35.919 5540.771 - 5570.560: 0.4567% ( 3) 00:08:35.919 5570.560 - 5600.349: 0.4728% ( 2) 00:08:35.919 5600.349 - 5630.138: 0.4888% ( 2) 00:08:35.919 5630.138 - 5659.927: 0.5128% ( 3) 00:08:35.919 7506.851 - 7536.640: 0.5208% ( 1) 00:08:35.919 7536.640 - 7566.429: 0.5689% ( 6) 00:08:35.919 7566.429 - 7596.218: 0.6170% ( 6) 00:08:35.919 7596.218 - 7626.007: 0.6410% ( 3) 00:08:35.919 7626.007 - 7685.585: 0.7372% ( 12) 00:08:35.919 7685.585 - 7745.164: 0.7692% ( 4) 00:08:35.919 7745.164 - 7804.742: 0.8013% ( 4) 00:08:35.919 7804.742 - 7864.320: 0.8253% ( 3) 00:08:35.919 7864.320 - 7923.898: 0.8574% ( 4) 00:08:35.919 7923.898 - 7983.476: 0.8814% ( 3) 00:08:35.919 7983.476 - 8043.055: 0.9054% ( 3) 00:08:35.919 8043.055 - 8102.633: 0.9375% ( 4) 00:08:35.919 8102.633 - 8162.211: 0.9615% ( 3) 00:08:35.919 8162.211 - 8221.789: 1.0016% ( 5) 00:08:35.919 8221.789 - 8281.367: 1.0497% ( 6) 00:08:35.919 8281.367 - 8340.945: 1.1538% ( 13) 00:08:35.919 8340.945 - 8400.524: 1.2821% ( 16) 00:08:35.919 8400.524 - 8460.102: 1.4183% ( 17) 00:08:35.919 8460.102 - 8519.680: 1.7388% ( 40) 00:08:35.919 8519.680 - 8579.258: 2.1955% ( 57) 00:08:35.919 8579.258 - 8638.836: 2.9487% ( 94) 00:08:35.919 8638.836 - 8698.415: 4.0224% ( 134) 00:08:35.919 8698.415 - 8757.993: 5.6010% ( 197) 00:08:35.919 8757.993 - 8817.571: 7.1554% ( 194) 00:08:35.919 8817.571 - 8877.149: 8.7901% ( 204) 00:08:35.919 8877.149 - 8936.727: 10.6250% ( 229) 00:08:35.919 8936.727 - 8996.305: 12.6362% ( 251) 00:08:35.919 8996.305 - 9055.884: 14.5673% ( 241) 00:08:35.919 9055.884 - 9115.462: 16.4744% ( 238) 00:08:35.919 9115.462 - 9175.040: 18.2131% ( 217) 00:08:35.919 9175.040 - 9234.618: 19.8237% ( 201) 00:08:35.919 9234.618 - 9294.196: 21.5705% ( 218) 00:08:35.919 9294.196 - 9353.775: 23.4215% ( 231) 00:08:35.919 9353.775 - 9413.353: 25.2404% ( 227) 00:08:35.919 9413.353 - 9472.931: 27.3317% ( 261) 00:08:35.919 9472.931 - 9532.509: 29.6715% ( 292) 00:08:35.919 9532.509 - 9592.087: 32.2436% ( 321) 00:08:35.919 9592.087 - 9651.665: 34.9519% ( 338) 00:08:35.919 9651.665 - 9711.244: 37.8205% ( 358) 00:08:35.919 9711.244 - 9770.822: 40.4647% ( 330) 00:08:35.919 9770.822 - 9830.400: 43.2933% ( 353) 00:08:35.919 9830.400 - 9889.978: 46.1298% ( 354) 00:08:35.919 9889.978 - 9949.556: 49.0705% ( 367) 00:08:35.919 9949.556 - 10009.135: 51.9792% ( 363) 00:08:35.919 10009.135 - 10068.713: 55.0881% ( 388) 00:08:35.919 10068.713 - 10128.291: 58.1010% ( 376) 00:08:35.919 10128.291 - 10187.869: 61.1779% ( 384) 00:08:35.919 10187.869 - 10247.447: 64.1587% ( 372) 00:08:35.919 10247.447 - 10307.025: 67.1635% ( 375) 00:08:35.919 10307.025 - 10366.604: 69.7436% ( 322) 00:08:35.919 10366.604 - 10426.182: 71.9872% ( 280) 00:08:35.919 10426.182 - 10485.760: 74.0304% ( 255) 00:08:35.919 10485.760 - 10545.338: 75.9135% ( 235) 00:08:35.919 10545.338 - 10604.916: 77.6843% ( 221) 00:08:35.919 10604.916 - 10664.495: 79.2147% ( 191) 00:08:35.919 10664.495 - 10724.073: 80.5208% ( 163) 00:08:35.919 10724.073 - 10783.651: 81.5304% ( 126) 00:08:35.919 10783.651 - 10843.229: 82.3077% ( 97) 00:08:35.919 10843.229 - 10902.807: 82.9247% ( 77) 
00:08:35.919 10902.807 - 10962.385: 83.4455% ( 65) 00:08:35.919 10962.385 - 11021.964: 83.9744% ( 66) 00:08:35.919 11021.964 - 11081.542: 84.4391% ( 58) 00:08:35.919 11081.542 - 11141.120: 84.8317% ( 49) 00:08:35.919 11141.120 - 11200.698: 85.3606% ( 66) 00:08:35.919 11200.698 - 11260.276: 85.9615% ( 75) 00:08:35.919 11260.276 - 11319.855: 86.6506% ( 86) 00:08:35.919 11319.855 - 11379.433: 87.2596% ( 76) 00:08:35.919 11379.433 - 11439.011: 87.9808% ( 90) 00:08:35.919 11439.011 - 11498.589: 88.7099% ( 91) 00:08:35.919 11498.589 - 11558.167: 89.3910% ( 85) 00:08:35.919 11558.167 - 11617.745: 90.0641% ( 84) 00:08:35.919 11617.745 - 11677.324: 90.9215% ( 107) 00:08:35.919 11677.324 - 11736.902: 91.6266% ( 88) 00:08:35.919 11736.902 - 11796.480: 92.2676% ( 80) 00:08:35.919 11796.480 - 11856.058: 92.8606% ( 74) 00:08:35.919 11856.058 - 11915.636: 93.5096% ( 81) 00:08:35.919 11915.636 - 11975.215: 94.0625% ( 69) 00:08:35.919 11975.215 - 12034.793: 94.6314% ( 71) 00:08:35.919 12034.793 - 12094.371: 95.1202% ( 61) 00:08:35.919 12094.371 - 12153.949: 95.5609% ( 55) 00:08:35.919 12153.949 - 12213.527: 95.9696% ( 51) 00:08:35.919 12213.527 - 12273.105: 96.3702% ( 50) 00:08:35.919 12273.105 - 12332.684: 96.6506% ( 35) 00:08:35.919 12332.684 - 12392.262: 96.9631% ( 39) 00:08:35.919 12392.262 - 12451.840: 97.2676% ( 38) 00:08:35.919 12451.840 - 12511.418: 97.5721% ( 38) 00:08:35.919 12511.418 - 12570.996: 97.8365% ( 33) 00:08:35.919 12570.996 - 12630.575: 97.9888% ( 19) 00:08:35.919 12630.575 - 12690.153: 98.0689% ( 10) 00:08:35.919 12690.153 - 12749.731: 98.1330% ( 8) 00:08:35.919 12749.731 - 12809.309: 98.1811% ( 6) 00:08:35.919 12809.309 - 12868.887: 98.2452% ( 8) 00:08:35.919 12868.887 - 12928.465: 98.2933% ( 6) 00:08:35.919 12928.465 - 12988.044: 98.3173% ( 3) 00:08:35.919 12988.044 - 13047.622: 98.3494% ( 4) 00:08:35.919 13047.622 - 13107.200: 98.3814% ( 4) 00:08:35.919 13107.200 - 13166.778: 98.4135% ( 4) 00:08:35.919 13166.778 - 13226.356: 98.4375% ( 3) 00:08:35.919 13226.356 - 13285.935: 98.4535% ( 2) 00:08:35.919 13285.935 - 13345.513: 98.4615% ( 1) 00:08:35.919 13345.513 - 13405.091: 98.4856% ( 3) 00:08:35.919 13405.091 - 13464.669: 98.5176% ( 4) 00:08:35.919 13464.669 - 13524.247: 98.5497% ( 4) 00:08:35.919 13524.247 - 13583.825: 98.5978% ( 6) 00:08:35.919 13583.825 - 13643.404: 98.6378% ( 5) 00:08:35.919 13643.404 - 13702.982: 98.6779% ( 5) 00:08:35.919 13702.982 - 13762.560: 98.7420% ( 8) 00:08:35.919 13762.560 - 13822.138: 98.7580% ( 2) 00:08:35.919 13822.138 - 13881.716: 98.7660% ( 1) 00:08:35.919 13881.716 - 13941.295: 98.7821% ( 2) 00:08:35.919 13941.295 - 14000.873: 98.8061% ( 3) 00:08:35.920 14000.873 - 14060.451: 98.8221% ( 2) 00:08:35.920 14060.451 - 14120.029: 98.8301% ( 1) 00:08:35.920 14120.029 - 14179.607: 98.8381% ( 1) 00:08:35.920 14179.607 - 14239.185: 98.8542% ( 2) 00:08:35.920 14239.185 - 14298.764: 98.8702% ( 2) 00:08:35.920 14298.764 - 14358.342: 98.8782% ( 1) 00:08:35.920 14358.342 - 14417.920: 98.9022% ( 3) 00:08:35.920 14417.920 - 14477.498: 98.9263% ( 3) 00:08:35.920 14477.498 - 14537.076: 98.9503% ( 3) 00:08:35.920 14537.076 - 14596.655: 98.9663% ( 2) 00:08:35.920 14596.655 - 14656.233: 98.9744% ( 1) 00:08:35.920 23712.116 - 23831.273: 98.9904% ( 2) 00:08:35.920 23831.273 - 23950.429: 99.0144% ( 3) 00:08:35.920 23950.429 - 24069.585: 99.0465% ( 4) 00:08:35.920 24069.585 - 24188.742: 99.0705% ( 3) 00:08:35.920 24188.742 - 24307.898: 99.1026% ( 4) 00:08:35.920 24307.898 - 24427.055: 99.1266% ( 3) 00:08:35.920 24427.055 - 24546.211: 99.1587% ( 4) 00:08:35.920 24546.211 
- 24665.367: 99.1907% ( 4) 00:08:35.920 24665.367 - 24784.524: 99.2228% ( 4) 00:08:35.920 24784.524 - 24903.680: 99.2468% ( 3) 00:08:35.920 24903.680 - 25022.836: 99.2788% ( 4) 00:08:35.920 25022.836 - 25141.993: 99.3029% ( 3) 00:08:35.920 25141.993 - 25261.149: 99.3429% ( 5) 00:08:35.920 25261.149 - 25380.305: 99.3670% ( 3) 00:08:35.920 25380.305 - 25499.462: 99.3990% ( 4) 00:08:35.920 25499.462 - 25618.618: 99.4391% ( 5) 00:08:35.920 25618.618 - 25737.775: 99.4631% ( 3) 00:08:35.920 25737.775 - 25856.931: 99.4872% ( 3) 00:08:35.920 31695.593 - 31933.905: 99.5192% ( 4) 00:08:35.920 31933.905 - 32172.218: 99.6074% ( 11) 00:08:35.920 32172.218 - 32410.531: 99.6795% ( 9) 00:08:35.920 32410.531 - 32648.844: 99.7676% ( 11) 00:08:35.920 32648.844 - 32887.156: 99.8558% ( 11) 00:08:35.920 32887.156 - 33125.469: 99.9439% ( 11) 00:08:35.920 33125.469 - 33363.782: 100.0000% ( 7) 00:08:35.920 00:08:35.920 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:35.920 ============================================================================== 00:08:35.920 Range in us Cumulative IO count 00:08:35.920 4498.153 - 4527.942: 0.0080% ( 1) 00:08:35.920 4706.676 - 4736.465: 0.1282% ( 15) 00:08:35.920 4736.465 - 4766.255: 0.2163% ( 11) 00:08:35.920 4766.255 - 4796.044: 0.2324% ( 2) 00:08:35.920 4796.044 - 4825.833: 0.2484% ( 2) 00:08:35.920 4825.833 - 4855.622: 0.2644% ( 2) 00:08:35.920 4855.622 - 4885.411: 0.2804% ( 2) 00:08:35.920 4885.411 - 4915.200: 0.2885% ( 1) 00:08:35.920 4915.200 - 4944.989: 0.3045% ( 2) 00:08:35.920 4944.989 - 4974.778: 0.3205% ( 2) 00:08:35.920 4974.778 - 5004.567: 0.3365% ( 2) 00:08:35.920 5004.567 - 5034.356: 0.3526% ( 2) 00:08:35.920 5034.356 - 5064.145: 0.3686% ( 2) 00:08:35.920 5064.145 - 5093.935: 0.3846% ( 2) 00:08:35.920 5093.935 - 5123.724: 0.3926% ( 1) 00:08:35.920 5123.724 - 5153.513: 0.4087% ( 2) 00:08:35.920 5153.513 - 5183.302: 0.4247% ( 2) 00:08:35.920 5183.302 - 5213.091: 0.4407% ( 2) 00:08:35.920 5213.091 - 5242.880: 0.4487% ( 1) 00:08:35.920 5242.880 - 5272.669: 0.4647% ( 2) 00:08:35.920 5272.669 - 5302.458: 0.4808% ( 2) 00:08:35.920 5302.458 - 5332.247: 0.5048% ( 3) 00:08:35.920 5332.247 - 5362.036: 0.5128% ( 1) 00:08:35.920 7179.171 - 7208.960: 0.5689% ( 7) 00:08:35.920 7208.960 - 7238.749: 0.6330% ( 8) 00:08:35.920 7238.749 - 7268.538: 0.7051% ( 9) 00:08:35.920 7268.538 - 7298.327: 0.7212% ( 2) 00:08:35.920 7298.327 - 7328.116: 0.7372% ( 2) 00:08:35.920 7328.116 - 7357.905: 0.7532% ( 2) 00:08:35.920 7357.905 - 7387.695: 0.7692% ( 2) 00:08:35.920 7387.695 - 7417.484: 0.7853% ( 2) 00:08:35.920 7417.484 - 7447.273: 0.7933% ( 1) 00:08:35.920 7447.273 - 7477.062: 0.8093% ( 2) 00:08:35.920 7477.062 - 7506.851: 0.8173% ( 1) 00:08:35.920 7506.851 - 7536.640: 0.8333% ( 2) 00:08:35.920 7536.640 - 7566.429: 0.8413% ( 1) 00:08:35.920 7566.429 - 7596.218: 0.8574% ( 2) 00:08:35.920 7596.218 - 7626.007: 0.8734% ( 2) 00:08:35.920 7626.007 - 7685.585: 0.9054% ( 4) 00:08:35.920 7685.585 - 7745.164: 0.9295% ( 3) 00:08:35.920 7745.164 - 7804.742: 0.9696% ( 5) 00:08:35.920 7804.742 - 7864.320: 1.0016% ( 4) 00:08:35.920 7864.320 - 7923.898: 1.0256% ( 3) 00:08:35.920 8162.211 - 8221.789: 1.0337% ( 1) 00:08:35.920 8221.789 - 8281.367: 1.0417% ( 1) 00:08:35.920 8281.367 - 8340.945: 1.0978% ( 7) 00:08:35.920 8340.945 - 8400.524: 1.3462% ( 31) 00:08:35.920 8400.524 - 8460.102: 1.5785% ( 29) 00:08:35.920 8460.102 - 8519.680: 1.8990% ( 40) 00:08:35.920 8519.680 - 8579.258: 2.3718% ( 59) 00:08:35.920 8579.258 - 8638.836: 3.0288% ( 82) 00:08:35.920 8638.836 - 8698.415: 4.0545% ( 
128) 00:08:35.920 8698.415 - 8757.993: 5.4647% ( 176) 00:08:35.920 8757.993 - 8817.571: 7.0433% ( 197) 00:08:35.920 8817.571 - 8877.149: 8.6699% ( 203) 00:08:35.920 8877.149 - 8936.727: 10.4327% ( 220) 00:08:35.920 8936.727 - 8996.305: 12.5160% ( 260) 00:08:35.920 8996.305 - 9055.884: 14.4471% ( 241) 00:08:35.920 9055.884 - 9115.462: 16.4503% ( 250) 00:08:35.920 9115.462 - 9175.040: 18.3013% ( 231) 00:08:35.920 9175.040 - 9234.618: 20.0561% ( 219) 00:08:35.920 9234.618 - 9294.196: 21.9551% ( 237) 00:08:35.920 9294.196 - 9353.775: 23.7420% ( 223) 00:08:35.920 9353.775 - 9413.353: 25.5929% ( 231) 00:08:35.920 9413.353 - 9472.931: 27.4679% ( 234) 00:08:35.920 9472.931 - 9532.509: 29.4952% ( 253) 00:08:35.920 9532.509 - 9592.087: 31.9231% ( 303) 00:08:35.920 9592.087 - 9651.665: 34.6474% ( 340) 00:08:35.920 9651.665 - 9711.244: 37.4920% ( 355) 00:08:35.920 9711.244 - 9770.822: 40.2965% ( 350) 00:08:35.920 9770.822 - 9830.400: 43.3173% ( 377) 00:08:35.920 9830.400 - 9889.978: 46.5465% ( 403) 00:08:35.920 9889.978 - 9949.556: 49.2949% ( 343) 00:08:35.920 9949.556 - 10009.135: 52.0833% ( 348) 00:08:35.920 10009.135 - 10068.713: 54.8157% ( 341) 00:08:35.920 10068.713 - 10128.291: 57.8045% ( 373) 00:08:35.920 10128.291 - 10187.869: 60.9936% ( 398) 00:08:35.920 10187.869 - 10247.447: 64.1667% ( 396) 00:08:35.920 10247.447 - 10307.025: 67.0673% ( 362) 00:08:35.920 10307.025 - 10366.604: 69.7917% ( 340) 00:08:35.920 10366.604 - 10426.182: 72.1635% ( 296) 00:08:35.920 10426.182 - 10485.760: 74.3830% ( 277) 00:08:35.920 10485.760 - 10545.338: 76.3221% ( 242) 00:08:35.920 10545.338 - 10604.916: 77.9968% ( 209) 00:08:35.920 10604.916 - 10664.495: 79.4391% ( 180) 00:08:35.920 10664.495 - 10724.073: 80.7452% ( 163) 00:08:35.920 10724.073 - 10783.651: 81.7388% ( 124) 00:08:35.920 10783.651 - 10843.229: 82.4840% ( 93) 00:08:35.920 10843.229 - 10902.807: 83.1250% ( 80) 00:08:35.920 10902.807 - 10962.385: 83.5577% ( 54) 00:08:35.920 10962.385 - 11021.964: 84.0304% ( 59) 00:08:35.920 11021.964 - 11081.542: 84.5753% ( 68) 00:08:35.920 11081.542 - 11141.120: 85.1522% ( 72) 00:08:35.920 11141.120 - 11200.698: 85.7292% ( 72) 00:08:35.920 11200.698 - 11260.276: 86.2981% ( 71) 00:08:35.920 11260.276 - 11319.855: 86.9792% ( 85) 00:08:35.920 11319.855 - 11379.433: 87.5881% ( 76) 00:08:35.920 11379.433 - 11439.011: 88.2532% ( 83) 00:08:35.920 11439.011 - 11498.589: 88.8381% ( 73) 00:08:35.920 11498.589 - 11558.167: 89.4551% ( 77) 00:08:35.920 11558.167 - 11617.745: 90.1282% ( 84) 00:08:35.920 11617.745 - 11677.324: 90.8574% ( 91) 00:08:35.920 11677.324 - 11736.902: 91.5705% ( 89) 00:08:35.920 11736.902 - 11796.480: 92.3157% ( 93) 00:08:35.920 11796.480 - 11856.058: 93.0048% ( 86) 00:08:35.920 11856.058 - 11915.636: 93.5417% ( 67) 00:08:35.920 11915.636 - 11975.215: 94.1907% ( 81) 00:08:35.920 11975.215 - 12034.793: 94.7356% ( 68) 00:08:35.920 12034.793 - 12094.371: 95.2804% ( 68) 00:08:35.920 12094.371 - 12153.949: 95.7772% ( 62) 00:08:35.920 12153.949 - 12213.527: 96.1779% ( 50) 00:08:35.920 12213.527 - 12273.105: 96.5385% ( 45) 00:08:35.920 12273.105 - 12332.684: 96.9311% ( 49) 00:08:35.920 12332.684 - 12392.262: 97.2756% ( 43) 00:08:35.920 12392.262 - 12451.840: 97.5721% ( 37) 00:08:35.920 12451.840 - 12511.418: 97.8125% ( 30) 00:08:35.920 12511.418 - 12570.996: 97.9728% ( 20) 00:08:35.920 12570.996 - 12630.575: 98.1170% ( 18) 00:08:35.920 12630.575 - 12690.153: 98.2051% ( 11) 00:08:35.920 12690.153 - 12749.731: 98.2452% ( 5) 00:08:35.920 12749.731 - 12809.309: 98.2692% ( 3) 00:08:35.920 12809.309 - 12868.887: 
98.3013% ( 4) 00:08:35.920 12868.887 - 12928.465: 98.3333% ( 4) 00:08:35.920 12928.465 - 12988.044: 98.3654% ( 4) 00:08:35.920 12988.044 - 13047.622: 98.3974% ( 4) 00:08:35.920 13047.622 - 13107.200: 98.4295% ( 4) 00:08:35.920 13107.200 - 13166.778: 98.4455% ( 2) 00:08:35.920 13166.778 - 13226.356: 98.4615% ( 2) 00:08:35.920 13345.513 - 13405.091: 98.4696% ( 1) 00:08:35.920 13464.669 - 13524.247: 98.5176% ( 6) 00:08:35.920 13524.247 - 13583.825: 98.5417% ( 3) 00:08:35.920 13583.825 - 13643.404: 98.5978% ( 7) 00:08:35.920 13643.404 - 13702.982: 98.6699% ( 9) 00:08:35.920 13702.982 - 13762.560: 98.7260% ( 7) 00:08:35.920 13762.560 - 13822.138: 98.7660% ( 5) 00:08:35.920 13822.138 - 13881.716: 98.7740% ( 1) 00:08:35.920 13881.716 - 13941.295: 98.7981% ( 3) 00:08:35.920 13941.295 - 14000.873: 98.8221% ( 3) 00:08:35.920 14000.873 - 14060.451: 98.8381% ( 2) 00:08:35.920 14060.451 - 14120.029: 98.8542% ( 2) 00:08:35.920 14120.029 - 14179.607: 98.8702% ( 2) 00:08:35.920 14179.607 - 14239.185: 98.8862% ( 2) 00:08:35.920 14239.185 - 14298.764: 98.9022% ( 2) 00:08:35.920 14298.764 - 14358.342: 98.9183% ( 2) 00:08:35.920 14358.342 - 14417.920: 98.9343% ( 2) 00:08:35.920 14417.920 - 14477.498: 98.9503% ( 2) 00:08:35.920 14477.498 - 14537.076: 98.9663% ( 2) 00:08:35.920 14537.076 - 14596.655: 98.9744% ( 1) 00:08:35.920 23473.804 - 23592.960: 99.0064% ( 4) 00:08:35.920 23592.960 - 23712.116: 99.0385% ( 4) 00:08:35.920 23712.116 - 23831.273: 99.0705% ( 4) 00:08:35.920 23831.273 - 23950.429: 99.1026% ( 4) 00:08:35.920 23950.429 - 24069.585: 99.1346% ( 4) 00:08:35.920 24069.585 - 24188.742: 99.1667% ( 4) 00:08:35.920 24188.742 - 24307.898: 99.1987% ( 4) 00:08:35.920 24307.898 - 24427.055: 99.2228% ( 3) 00:08:35.920 24427.055 - 24546.211: 99.2548% ( 4) 00:08:35.920 24546.211 - 24665.367: 99.2788% ( 3) 00:08:35.920 24665.367 - 24784.524: 99.3189% ( 5) 00:08:35.921 24784.524 - 24903.680: 99.3429% ( 3) 00:08:35.921 24903.680 - 25022.836: 99.3670% ( 3) 00:08:35.921 25022.836 - 25141.993: 99.3990% ( 4) 00:08:35.921 25141.993 - 25261.149: 99.4311% ( 4) 00:08:35.921 25261.149 - 25380.305: 99.4551% ( 3) 00:08:35.921 25380.305 - 25499.462: 99.4872% ( 4) 00:08:35.921 30980.655 - 31218.967: 99.5272% ( 5) 00:08:35.921 31218.967 - 31457.280: 99.6234% ( 12) 00:08:35.921 31457.280 - 31695.593: 99.7035% ( 10) 00:08:35.921 31695.593 - 31933.905: 99.7917% ( 11) 00:08:35.921 31933.905 - 32172.218: 99.8798% ( 11) 00:08:35.921 32172.218 - 32410.531: 99.9599% ( 10) 00:08:35.921 32410.531 - 32648.844: 100.0000% ( 5) 00:08:35.921 00:08:35.921 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:35.921 ============================================================================== 00:08:35.921 Range in us Cumulative IO count 00:08:35.921 4230.051 - 4259.840: 0.1282% ( 16) 00:08:35.921 4259.840 - 4289.629: 0.1442% ( 2) 00:08:35.921 4289.629 - 4319.418: 0.1603% ( 2) 00:08:35.921 4319.418 - 4349.207: 0.1683% ( 1) 00:08:35.921 4349.207 - 4378.996: 0.1843% ( 2) 00:08:35.921 4378.996 - 4408.785: 0.2003% ( 2) 00:08:35.921 4408.785 - 4438.575: 0.2163% ( 2) 00:08:35.921 4438.575 - 4468.364: 0.2324% ( 2) 00:08:35.921 4468.364 - 4498.153: 0.2404% ( 1) 00:08:35.921 4498.153 - 4527.942: 0.2564% ( 2) 00:08:35.921 4527.942 - 4557.731: 0.2724% ( 2) 00:08:35.921 4557.731 - 4587.520: 0.2885% ( 2) 00:08:35.921 4587.520 - 4617.309: 0.3045% ( 2) 00:08:35.921 4617.309 - 4647.098: 0.3285% ( 3) 00:08:35.921 4647.098 - 4676.887: 0.3446% ( 2) 00:08:35.921 4676.887 - 4706.676: 0.3606% ( 2) 00:08:35.921 4706.676 - 4736.465: 0.3766% ( 2) 
00:08:35.921 4736.465 - 4766.255: 0.4006% ( 3) 00:08:35.921 4766.255 - 4796.044: 0.4167% ( 2) 00:08:35.921 4796.044 - 4825.833: 0.4327% ( 2) 00:08:35.921 4825.833 - 4855.622: 0.4487% ( 2) 00:08:35.921 4855.622 - 4885.411: 0.4567% ( 1) 00:08:35.921 4885.411 - 4915.200: 0.4728% ( 2) 00:08:35.921 4915.200 - 4944.989: 0.4968% ( 3) 00:08:35.921 4944.989 - 4974.778: 0.5128% ( 2) 00:08:35.921 6851.491 - 6881.280: 0.5769% ( 8) 00:08:35.921 6881.280 - 6911.069: 0.6330% ( 7) 00:08:35.921 6911.069 - 6940.858: 0.6971% ( 8) 00:08:35.921 6940.858 - 6970.647: 0.7372% ( 5) 00:08:35.921 6970.647 - 7000.436: 0.7452% ( 1) 00:08:35.921 7000.436 - 7030.225: 0.7612% ( 2) 00:08:35.921 7030.225 - 7060.015: 0.7772% ( 2) 00:08:35.921 7060.015 - 7089.804: 0.7933% ( 2) 00:08:35.921 7089.804 - 7119.593: 0.8013% ( 1) 00:08:35.921 7119.593 - 7149.382: 0.8173% ( 2) 00:08:35.921 7149.382 - 7179.171: 0.8333% ( 2) 00:08:35.921 7179.171 - 7208.960: 0.8494% ( 2) 00:08:35.921 7208.960 - 7238.749: 0.8654% ( 2) 00:08:35.921 7238.749 - 7268.538: 0.8814% ( 2) 00:08:35.921 7268.538 - 7298.327: 0.8974% ( 2) 00:08:35.921 7298.327 - 7328.116: 0.9135% ( 2) 00:08:35.921 7328.116 - 7357.905: 0.9295% ( 2) 00:08:35.921 7357.905 - 7387.695: 0.9455% ( 2) 00:08:35.921 7387.695 - 7417.484: 0.9615% ( 2) 00:08:35.921 7417.484 - 7447.273: 0.9696% ( 1) 00:08:35.921 7447.273 - 7477.062: 0.9856% ( 2) 00:08:35.921 7477.062 - 7506.851: 1.0016% ( 2) 00:08:35.921 7506.851 - 7536.640: 1.0256% ( 3) 00:08:35.921 8162.211 - 8221.789: 1.0417% ( 2) 00:08:35.921 8221.789 - 8281.367: 1.1298% ( 11) 00:08:35.921 8281.367 - 8340.945: 1.2179% ( 11) 00:08:35.921 8340.945 - 8400.524: 1.4022% ( 23) 00:08:35.921 8400.524 - 8460.102: 1.6587% ( 32) 00:08:35.921 8460.102 - 8519.680: 1.9952% ( 42) 00:08:35.921 8519.680 - 8579.258: 2.4279% ( 54) 00:08:35.921 8579.258 - 8638.836: 2.9647% ( 67) 00:08:35.921 8638.836 - 8698.415: 3.8622% ( 112) 00:08:35.921 8698.415 - 8757.993: 5.0962% ( 154) 00:08:35.921 8757.993 - 8817.571: 6.6506% ( 194) 00:08:35.921 8817.571 - 8877.149: 8.3814% ( 216) 00:08:35.921 8877.149 - 8936.727: 10.1683% ( 223) 00:08:35.921 8936.727 - 8996.305: 12.1474% ( 247) 00:08:35.921 8996.305 - 9055.884: 14.3830% ( 279) 00:08:35.921 9055.884 - 9115.462: 16.5064% ( 265) 00:08:35.921 9115.462 - 9175.040: 18.3574% ( 231) 00:08:35.921 9175.040 - 9234.618: 20.2404% ( 235) 00:08:35.921 9234.618 - 9294.196: 21.9391% ( 212) 00:08:35.921 9294.196 - 9353.775: 23.8622% ( 240) 00:08:35.921 9353.775 - 9413.353: 25.9135% ( 256) 00:08:35.921 9413.353 - 9472.931: 27.8285% ( 239) 00:08:35.921 9472.931 - 9532.509: 29.9679% ( 267) 00:08:35.921 9532.509 - 9592.087: 32.3558% ( 298) 00:08:35.921 9592.087 - 9651.665: 35.0962% ( 342) 00:08:35.921 9651.665 - 9711.244: 38.0689% ( 371) 00:08:35.921 9711.244 - 9770.822: 40.9696% ( 362) 00:08:35.921 9770.822 - 9830.400: 43.8942% ( 365) 00:08:35.921 9830.400 - 9889.978: 46.7869% ( 361) 00:08:35.921 9889.978 - 9949.556: 49.6074% ( 352) 00:08:35.921 9949.556 - 10009.135: 52.4439% ( 354) 00:08:35.921 10009.135 - 10068.713: 54.9760% ( 316) 00:08:35.921 10068.713 - 10128.291: 57.8285% ( 356) 00:08:35.921 10128.291 - 10187.869: 60.8974% ( 383) 00:08:35.921 10187.869 - 10247.447: 63.8381% ( 367) 00:08:35.921 10247.447 - 10307.025: 66.8429% ( 375) 00:08:35.921 10307.025 - 10366.604: 69.4151% ( 321) 00:08:35.921 10366.604 - 10426.182: 71.7388% ( 290) 00:08:35.921 10426.182 - 10485.760: 73.8462% ( 263) 00:08:35.921 10485.760 - 10545.338: 75.8413% ( 249) 00:08:35.921 10545.338 - 10604.916: 77.6923% ( 231) 00:08:35.921 10604.916 - 10664.495: 79.3189% ( 
203) 00:08:35.921 10664.495 - 10724.073: 80.6010% ( 160) 00:08:35.921 10724.073 - 10783.651: 81.7147% ( 139) 00:08:35.921 10783.651 - 10843.229: 82.6202% ( 113) 00:08:35.921 10843.229 - 10902.807: 83.2532% ( 79) 00:08:35.921 10902.807 - 10962.385: 83.7580% ( 63) 00:08:35.921 10962.385 - 11021.964: 84.2067% ( 56) 00:08:35.921 11021.964 - 11081.542: 84.6795% ( 59) 00:08:35.921 11081.542 - 11141.120: 85.1923% ( 64) 00:08:35.921 11141.120 - 11200.698: 85.6811% ( 61) 00:08:35.921 11200.698 - 11260.276: 86.2260% ( 68) 00:08:35.921 11260.276 - 11319.855: 86.7869% ( 70) 00:08:35.921 11319.855 - 11379.433: 87.3958% ( 76) 00:08:35.921 11379.433 - 11439.011: 88.1811% ( 98) 00:08:35.921 11439.011 - 11498.589: 88.9423% ( 95) 00:08:35.921 11498.589 - 11558.167: 89.7035% ( 95) 00:08:35.921 11558.167 - 11617.745: 90.4728% ( 96) 00:08:35.921 11617.745 - 11677.324: 91.1779% ( 88) 00:08:35.921 11677.324 - 11736.902: 91.8269% ( 81) 00:08:35.921 11736.902 - 11796.480: 92.5481% ( 90) 00:08:35.921 11796.480 - 11856.058: 93.2692% ( 90) 00:08:35.921 11856.058 - 11915.636: 93.8381% ( 71) 00:08:35.921 11915.636 - 11975.215: 94.4952% ( 82) 00:08:35.921 11975.215 - 12034.793: 95.0721% ( 72) 00:08:35.921 12034.793 - 12094.371: 95.5529% ( 60) 00:08:35.921 12094.371 - 12153.949: 96.0176% ( 58) 00:08:35.921 12153.949 - 12213.527: 96.3862% ( 46) 00:08:35.921 12213.527 - 12273.105: 96.7628% ( 47) 00:08:35.921 12273.105 - 12332.684: 97.1154% ( 44) 00:08:35.921 12332.684 - 12392.262: 97.4038% ( 36) 00:08:35.921 12392.262 - 12451.840: 97.6923% ( 36) 00:08:35.921 12451.840 - 12511.418: 97.9327% ( 30) 00:08:35.921 12511.418 - 12570.996: 98.1170% ( 23) 00:08:35.921 12570.996 - 12630.575: 98.2131% ( 12) 00:08:35.921 12630.575 - 12690.153: 98.2772% ( 8) 00:08:35.921 12690.153 - 12749.731: 98.3413% ( 8) 00:08:35.921 12749.731 - 12809.309: 98.3734% ( 4) 00:08:35.921 12809.309 - 12868.887: 98.3894% ( 2) 00:08:35.921 12868.887 - 12928.465: 98.4054% ( 2) 00:08:35.921 12928.465 - 12988.044: 98.4215% ( 2) 00:08:35.921 12988.044 - 13047.622: 98.4375% ( 2) 00:08:35.921 13047.622 - 13107.200: 98.4615% ( 3) 00:08:35.921 13226.356 - 13285.935: 98.4696% ( 1) 00:08:35.921 13285.935 - 13345.513: 98.4776% ( 1) 00:08:35.921 13405.091 - 13464.669: 98.5256% ( 6) 00:08:35.921 13464.669 - 13524.247: 98.5497% ( 3) 00:08:35.921 13524.247 - 13583.825: 98.5897% ( 5) 00:08:35.921 13583.825 - 13643.404: 98.6378% ( 6) 00:08:35.921 13643.404 - 13702.982: 98.6779% ( 5) 00:08:35.921 13702.982 - 13762.560: 98.7179% ( 5) 00:08:35.921 13762.560 - 13822.138: 98.7580% ( 5) 00:08:35.921 13822.138 - 13881.716: 98.7821% ( 3) 00:08:35.921 13881.716 - 13941.295: 98.8061% ( 3) 00:08:35.921 13941.295 - 14000.873: 98.8221% ( 2) 00:08:35.921 14000.873 - 14060.451: 98.8381% ( 2) 00:08:35.921 14060.451 - 14120.029: 98.8542% ( 2) 00:08:35.921 14120.029 - 14179.607: 98.8782% ( 3) 00:08:35.921 14179.607 - 14239.185: 98.8942% ( 2) 00:08:35.921 14239.185 - 14298.764: 98.9103% ( 2) 00:08:35.921 14298.764 - 14358.342: 98.9263% ( 2) 00:08:35.921 14358.342 - 14417.920: 98.9423% ( 2) 00:08:35.921 14417.920 - 14477.498: 98.9583% ( 2) 00:08:35.921 14477.498 - 14537.076: 98.9744% ( 2) 00:08:35.921 23235.491 - 23354.647: 98.9904% ( 2) 00:08:35.921 23354.647 - 23473.804: 99.0545% ( 8) 00:08:35.921 23473.804 - 23592.960: 99.1106% ( 7) 00:08:35.921 23592.960 - 23712.116: 99.1587% ( 6) 00:08:35.921 23712.116 - 23831.273: 99.1907% ( 4) 00:08:35.921 23831.273 - 23950.429: 99.2308% ( 5) 00:08:35.921 23950.429 - 24069.585: 99.2869% ( 7) 00:08:35.921 24069.585 - 24188.742: 99.3109% ( 3) 00:08:35.921 
24188.742 - 24307.898: 99.3269% ( 2)
00:08:35.921 24307.898 - 24427.055: 99.3510% ( 3)
00:08:35.921 24427.055 - 24546.211: 99.3750% ( 3)
00:08:35.921 24546.211 - 24665.367: 99.3990% ( 3)
00:08:35.921 24665.367 - 24784.524: 99.4311% ( 4)
00:08:35.921 24784.524 - 24903.680: 99.4551% ( 3)
00:08:35.921 24903.680 - 25022.836: 99.4872% ( 4)
00:08:35.921 30265.716 - 30384.873: 99.5353% ( 6)
00:08:35.921 30384.873 - 30504.029: 99.5753% ( 5)
00:08:35.921 30504.029 - 30742.342: 99.6635% ( 11)
00:08:35.921 30742.342 - 30980.655: 99.7516% ( 11)
00:08:35.921 30980.655 - 31218.967: 99.8317% ( 10)
00:08:35.921 31218.967 - 31457.280: 99.9199% ( 11)
00:08:35.921 31457.280 - 31695.593: 100.0000% ( 10)
00:08:35.921
00:08:35.921 05:55:58 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:35.921
00:08:35.921 real 0m2.604s
00:08:35.921 user 0m2.223s
00:08:35.921 sys 0m0.260s
00:08:35.921 05:55:58 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:35.921 ************************************
00:08:35.921 END TEST nvme_perf
05:55:58 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:35.921 ************************************
00:08:35.921 05:55:58 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:35.921 05:55:58 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:35.921 05:55:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:35.921 05:55:58 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:35.921 ************************************
00:08:35.921 START TEST nvme_hello_world
00:08:35.921 ************************************
00:08:35.921 05:55:58 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:36.180 Initializing NVMe Controllers
00:08:36.180 Attached to 0000:00:10.0
00:08:36.180 Namespace ID: 1 size: 6GB
00:08:36.180 Attached to 0000:00:11.0
00:08:36.180 Namespace ID: 1 size: 5GB
00:08:36.180 Attached to 0000:00:13.0
00:08:36.180 Namespace ID: 1 size: 1GB
00:08:36.180 Attached to 0000:00:12.0
00:08:36.180 Namespace ID: 1 size: 4GB
00:08:36.180 Namespace ID: 2 size: 4GB
00:08:36.180 Namespace ID: 3 size: 4GB
00:08:36.180 Initialization complete.
00:08:36.180 INFO: using host memory buffer for IO
00:08:36.180 Hello world!
00:08:36.180 INFO: using host memory buffer for IO
00:08:36.180 Hello world!
00:08:36.180 INFO: using host memory buffer for IO
00:08:36.180 Hello world!
00:08:36.180 INFO: using host memory buffer for IO
00:08:36.180 Hello world!
00:08:36.180 INFO: using host memory buffer for IO
00:08:36.180 Hello world!
00:08:36.180 INFO: using host memory buffer for IO
00:08:36.180 Hello world!
00:08:36.180
00:08:36.180 real 0m0.251s
00:08:36.180 user 0m0.089s
00:08:36.180 sys 0m0.119s
00:08:36.180 05:55:59 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:36.180 05:55:59 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:36.180 ************************************
00:08:36.180 END TEST nvme_hello_world
************************************
00:08:36.180 05:55:59 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:36.180 05:55:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:36.180 05:55:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:36.180 05:55:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:36.180 ************************************
00:08:36.180 START TEST nvme_sgl
00:08:36.180 ************************************
00:08:36.180 05:55:59 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:36.439 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:36.439 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:36.439 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:36.439 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:36.439 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:36.439 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:36.439 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:36.439 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:36.439 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:36.439 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:36.439 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:36.439 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:36.439 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:36.439 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:36.439 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:36.439 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:36.440 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:36.440 NVMe Readv/Writev Request test
00:08:36.440 Attached to 0000:00:10.0
00:08:36.440 Attached to 0000:00:11.0
00:08:36.440 Attached to 0000:00:13.0
00:08:36.440 Attached to 0000:00:12.0
00:08:36.440 0000:00:10.0: build_io_request_2 test passed
00:08:36.440 0000:00:10.0: build_io_request_4 test passed
00:08:36.440 0000:00:10.0: build_io_request_5 test passed
00:08:36.440 0000:00:10.0: build_io_request_6 test passed
00:08:36.440 0000:00:10.0: build_io_request_7 test passed
00:08:36.440 0000:00:10.0: build_io_request_10 test passed
00:08:36.440 0000:00:11.0: build_io_request_2 test passed
00:08:36.440 0000:00:11.0: build_io_request_4 test passed
00:08:36.440 0000:00:11.0: build_io_request_5 test passed
00:08:36.440 0000:00:11.0: build_io_request_6 test passed
00:08:36.440 0000:00:11.0: build_io_request_7 test passed
00:08:36.440 0000:00:11.0: build_io_request_10 test passed
00:08:36.440 Cleaning up...
00:08:36.440
00:08:36.440 real 0m0.332s
00:08:36.440 user 0m0.168s
00:08:36.440 sys 0m0.113s
00:08:36.440 05:55:59 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:36.440 05:55:59 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:36.440 ************************************
00:08:36.440 END TEST nvme_sgl
************************************
00:08:36.440 05:55:59 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:36.440 05:55:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:36.440 05:55:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:36.440 05:55:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:36.698 ************************************
00:08:36.698 START TEST nvme_e2edp
00:08:36.698 ************************************
00:08:36.698 05:55:59 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:36.698 NVMe Write/Read with End-to-End data protection test
00:08:36.698 Attached to 0000:00:10.0
00:08:36.698 Attached to 0000:00:11.0
00:08:36.698 Attached to 0000:00:13.0
00:08:36.698 Attached to 0000:00:12.0
00:08:36.698 Cleaning up...
00:08:36.698
00:08:36.698 real 0m0.249s
00:08:36.698 user 0m0.099s
00:08:36.698 sys 0m0.107s
00:08:36.698 05:55:59 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:36.699 05:55:59 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:36.699 ************************************
00:08:36.699 END TEST nvme_e2edp
************************************
00:08:36.957 05:55:59 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:36.957 05:55:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:36.957 05:55:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:36.957 05:55:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:36.957 ************************************
00:08:36.957 START TEST nvme_reserve
00:08:36.957 ************************************
00:08:36.957 05:55:59 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:37.215 =====================================================
00:08:37.215 NVMe Controller at PCI bus 0, device 16, function 0
00:08:37.215 =====================================================
00:08:37.215 Reservations: Not Supported
00:08:37.215 =====================================================
00:08:37.215 NVMe Controller at PCI bus 0, device 17, function 0
00:08:37.215 =====================================================
00:08:37.215 Reservations: Not Supported
00:08:37.215 =====================================================
00:08:37.215 NVMe Controller at PCI bus 0, device 19, function 0
00:08:37.215 =====================================================
00:08:37.215 Reservations: Not Supported
00:08:37.216 =====================================================
00:08:37.216 NVMe Controller at PCI bus 0, device 18, function 0
00:08:37.216 =====================================================
00:08:37.216 Reservations: Not Supported
00:08:37.216 Reservation test passed
00:08:37.216
00:08:37.216 real 0m0.242s
00:08:37.216 user 0m0.079s
00:08:37.216 sys 0m0.121s
00:08:37.216 05:56:00 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.216 05:56:00 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:37.216 ************************************
00:08:37.216 END TEST nvme_reserve
************************************
00:08:37.216 05:56:00 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:37.216 05:56:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:37.216 05:56:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.216 05:56:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:37.216 ************************************
00:08:37.216 START TEST nvme_err_injection
00:08:37.216 ************************************
00:08:37.216 05:56:00 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:37.474 NVMe Error Injection test
00:08:37.474 Attached to 0000:00:10.0
00:08:37.474 Attached to 0000:00:11.0
00:08:37.474 Attached to 0000:00:13.0
00:08:37.474 Attached to 0000:00:12.0
00:08:37.474 0000:00:13.0: get features failed as expected
00:08:37.474 0000:00:12.0: get features failed as expected
00:08:37.474 0000:00:10.0: get features failed as expected
00:08:37.474 0000:00:11.0: get features failed as expected
00:08:37.474 0000:00:13.0: get features successfully as expected
00:08:37.474 0000:00:12.0: get features successfully as expected
00:08:37.474 0000:00:10.0: get features successfully as expected
00:08:37.474 0000:00:11.0: get features successfully as expected
00:08:37.474 0000:00:10.0: read failed as expected
00:08:37.474 0000:00:11.0: read failed as expected
00:08:37.474 0000:00:13.0: read failed as expected
00:08:37.474 0000:00:12.0: read failed as expected
00:08:37.474 0000:00:10.0: read successfully as expected
00:08:37.474 0000:00:11.0: read successfully as expected
00:08:37.474 0000:00:13.0: read successfully as expected
00:08:37.474 0000:00:12.0: read successfully as expected
00:08:37.474 Cleaning up...
00:08:37.474
00:08:37.474 real 0m0.274s
00:08:37.474 user 0m0.098s
00:08:37.474 sys 0m0.127s
00:08:37.474 05:56:00 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.474 05:56:00 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:37.474 ************************************
00:08:37.474 END TEST nvme_err_injection
************************************
00:08:37.474 05:56:00 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:37.474 05:56:00 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:37.474 05:56:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.474 05:56:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:37.474 ************************************
00:08:37.474 START TEST nvme_overhead
00:08:37.474 ************************************
00:08:37.474 05:56:00 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:38.849 Initializing NVMe Controllers
00:08:38.849 Attached to 0000:00:10.0
00:08:38.849 Attached to 0000:00:11.0
00:08:38.849 Attached to 0000:00:13.0
00:08:38.849 Attached to 0000:00:12.0
00:08:38.849 Initialization complete. Launching workers.
00:08:38.849 submit (in ns) avg, min, max = 17045.2, 13062.3, 97347.7 00:08:38.849 complete (in ns) avg, min, max = 11735.9, 9018.2, 518939.5 00:08:38.849 00:08:38.849 Submit histogram 00:08:38.849 ================ 00:08:38.849 Range in us Cumulative Count 00:08:38.849 13.033 - 13.091: 0.0103% ( 1) 00:08:38.849 13.324 - 13.382: 0.0207% ( 1) 00:08:38.849 13.382 - 13.440: 0.0310% ( 1) 00:08:38.849 13.440 - 13.498: 0.0517% ( 2) 00:08:38.849 13.498 - 13.556: 0.0724% ( 2) 00:08:38.849 13.556 - 13.615: 0.1552% ( 8) 00:08:38.849 13.615 - 13.673: 0.3001% ( 14) 00:08:38.849 13.673 - 13.731: 0.5484% ( 24) 00:08:38.849 13.731 - 13.789: 0.8381% ( 28) 00:08:38.849 13.789 - 13.847: 1.0554% ( 21) 00:08:38.849 13.847 - 13.905: 1.3037% ( 24) 00:08:38.849 13.905 - 13.964: 1.4485% ( 14) 00:08:38.849 13.964 - 14.022: 1.7279% ( 27) 00:08:38.849 14.022 - 14.080: 2.2659% ( 52) 00:08:38.849 14.080 - 14.138: 3.1454% ( 85) 00:08:38.849 14.138 - 14.196: 4.0766% ( 90) 00:08:38.849 14.196 - 14.255: 4.7801% ( 68) 00:08:38.849 14.255 - 14.313: 5.4630% ( 66) 00:08:38.849 14.313 - 14.371: 5.9493% ( 47) 00:08:38.849 14.371 - 14.429: 6.4252% ( 46) 00:08:38.849 14.429 - 14.487: 7.1392% ( 69) 00:08:38.849 14.487 - 14.545: 7.9669% ( 80) 00:08:38.849 14.545 - 14.604: 8.6601% ( 67) 00:08:38.849 14.604 - 14.662: 9.2395% ( 56) 00:08:38.849 14.662 - 14.720: 9.8603% ( 60) 00:08:38.849 14.720 - 14.778: 10.4604% ( 58) 00:08:38.849 14.778 - 14.836: 11.4227% ( 93) 00:08:38.849 14.836 - 14.895: 13.8541% ( 235) 00:08:38.849 14.895 - 15.011: 23.8903% ( 970) 00:08:38.849 15.011 - 15.127: 37.3513% ( 1301) 00:08:38.849 15.127 - 15.244: 46.5184% ( 886) 00:08:38.849 15.244 - 15.360: 52.1055% ( 540) 00:08:38.849 15.360 - 15.476: 54.7646% ( 257) 00:08:38.849 15.476 - 15.593: 56.7098% ( 188) 00:08:38.849 15.593 - 15.709: 58.5204% ( 175) 00:08:38.849 15.709 - 15.825: 59.8551% ( 129) 00:08:38.849 15.825 - 15.942: 60.6622% ( 78) 00:08:38.849 15.942 - 16.058: 61.1692% ( 49) 00:08:38.849 16.058 - 16.175: 61.5313% ( 35) 00:08:38.849 16.175 - 16.291: 61.7589% ( 22) 00:08:38.849 16.291 - 16.407: 61.8417% ( 8) 00:08:38.849 16.407 - 16.524: 61.9659% ( 12) 00:08:38.849 16.524 - 16.640: 62.0072% ( 4) 00:08:38.849 16.640 - 16.756: 62.1004% ( 9) 00:08:38.849 16.756 - 16.873: 62.2556% ( 15) 00:08:38.849 16.873 - 16.989: 62.3487% ( 9) 00:08:38.849 16.989 - 17.105: 62.4418% ( 9) 00:08:38.849 17.105 - 17.222: 62.4935% ( 5) 00:08:38.849 17.222 - 17.338: 62.5142% ( 2) 00:08:38.849 17.338 - 17.455: 62.5349% ( 2) 00:08:38.849 17.455 - 17.571: 62.5763% ( 4) 00:08:38.849 17.571 - 17.687: 62.8557% ( 27) 00:08:38.849 17.687 - 17.804: 64.5111% ( 160) 00:08:38.849 17.804 - 17.920: 69.3740% ( 470) 00:08:38.849 17.920 - 18.036: 76.2545% ( 665) 00:08:38.849 18.036 - 18.153: 81.6555% ( 522) 00:08:38.849 18.153 - 18.269: 83.6937% ( 197) 00:08:38.849 18.269 - 18.385: 84.5732% ( 85) 00:08:38.849 18.385 - 18.502: 85.2768% ( 68) 00:08:38.849 18.502 - 18.618: 86.3114% ( 100) 00:08:38.849 18.618 - 18.735: 87.1702% ( 83) 00:08:38.849 18.735 - 18.851: 87.8634% ( 67) 00:08:38.849 18.851 - 18.967: 88.2566% ( 38) 00:08:38.849 18.967 - 19.084: 88.5360% ( 27) 00:08:38.849 19.084 - 19.200: 88.8360% ( 29) 00:08:38.849 19.200 - 19.316: 88.9291% ( 9) 00:08:38.849 19.316 - 19.433: 89.0740% ( 14) 00:08:38.849 19.433 - 19.549: 89.1568% ( 8) 00:08:38.849 19.549 - 19.665: 89.3533% ( 19) 00:08:38.849 19.665 - 19.782: 89.5499% ( 19) 00:08:38.849 19.782 - 19.898: 89.7051% ( 15) 00:08:38.849 19.898 - 20.015: 89.8707% ( 16) 00:08:38.849 20.015 - 20.131: 90.0673% ( 19) 00:08:38.849 20.131 - 20.247: 90.2018% 
( 13) 00:08:38.849 20.247 - 20.364: 90.3052% ( 10) 00:08:38.849 20.364 - 20.480: 90.4190% ( 11) 00:08:38.849 20.480 - 20.596: 90.5742% ( 15) 00:08:38.849 20.596 - 20.713: 90.6570% ( 8) 00:08:38.849 20.713 - 20.829: 90.7605% ( 10) 00:08:38.849 20.829 - 20.945: 90.8743% ( 11) 00:08:38.850 20.945 - 21.062: 91.0398% ( 16) 00:08:38.850 21.062 - 21.178: 91.1640% ( 12) 00:08:38.850 21.178 - 21.295: 91.2882% ( 12) 00:08:38.850 21.295 - 21.411: 91.3916% ( 10) 00:08:38.850 21.411 - 21.527: 91.5572% ( 16) 00:08:38.850 21.527 - 21.644: 91.6192% ( 6) 00:08:38.850 21.644 - 21.760: 91.6606% ( 4) 00:08:38.850 21.760 - 21.876: 91.7641% ( 10) 00:08:38.850 21.876 - 21.993: 91.8986% ( 13) 00:08:38.850 21.993 - 22.109: 92.0021% ( 10) 00:08:38.850 22.109 - 22.225: 92.1676% ( 16) 00:08:38.850 22.225 - 22.342: 92.3125% ( 14) 00:08:38.850 22.342 - 22.458: 92.4573% ( 14) 00:08:38.850 22.458 - 22.575: 92.5297% ( 7) 00:08:38.850 22.575 - 22.691: 92.6022% ( 7) 00:08:38.850 22.691 - 22.807: 92.6953% ( 9) 00:08:38.850 22.807 - 22.924: 92.7470% ( 5) 00:08:38.850 22.924 - 23.040: 92.8195% ( 7) 00:08:38.850 23.040 - 23.156: 92.8919% ( 7) 00:08:38.850 23.156 - 23.273: 92.9850% ( 9) 00:08:38.850 23.273 - 23.389: 93.0781% ( 9) 00:08:38.850 23.389 - 23.505: 93.1402% ( 6) 00:08:38.850 23.505 - 23.622: 93.2540% ( 11) 00:08:38.850 23.622 - 23.738: 93.3678% ( 11) 00:08:38.850 23.738 - 23.855: 93.4816% ( 11) 00:08:38.850 23.855 - 23.971: 93.6265% ( 14) 00:08:38.850 23.971 - 24.087: 93.7196% ( 9) 00:08:38.850 24.087 - 24.204: 93.7713% ( 5) 00:08:38.850 24.204 - 24.320: 93.9472% ( 17) 00:08:38.850 24.320 - 24.436: 94.0714% ( 12) 00:08:38.850 24.436 - 24.553: 94.1231% ( 5) 00:08:38.850 24.553 - 24.669: 94.1852% ( 6) 00:08:38.850 24.669 - 24.785: 94.2990% ( 11) 00:08:38.850 24.785 - 24.902: 94.4128% ( 11) 00:08:38.850 24.902 - 25.018: 94.5370% ( 12) 00:08:38.850 25.018 - 25.135: 94.6301% ( 9) 00:08:38.850 25.135 - 25.251: 94.8163% ( 18) 00:08:38.850 25.251 - 25.367: 94.9302% ( 11) 00:08:38.850 25.367 - 25.484: 95.0233% ( 9) 00:08:38.850 25.484 - 25.600: 95.0957% ( 7) 00:08:38.850 25.600 - 25.716: 95.2095% ( 11) 00:08:38.850 25.716 - 25.833: 95.2716% ( 6) 00:08:38.850 25.833 - 25.949: 95.3544% ( 8) 00:08:38.850 25.949 - 26.065: 95.4165% ( 6) 00:08:38.850 26.065 - 26.182: 95.4682% ( 5) 00:08:38.850 26.182 - 26.298: 95.5510% ( 8) 00:08:38.850 26.298 - 26.415: 95.6441% ( 9) 00:08:38.850 26.415 - 26.531: 95.7579% ( 11) 00:08:38.850 26.531 - 26.647: 95.7993% ( 4) 00:08:38.850 26.647 - 26.764: 95.9131% ( 11) 00:08:38.850 26.764 - 26.880: 96.0062% ( 9) 00:08:38.850 26.880 - 26.996: 96.0476% ( 4) 00:08:38.850 26.996 - 27.113: 96.1304% ( 8) 00:08:38.850 27.113 - 27.229: 96.1614% ( 3) 00:08:38.850 27.229 - 27.345: 96.2649% ( 10) 00:08:38.850 27.345 - 27.462: 96.3270% ( 6) 00:08:38.850 27.462 - 27.578: 96.3580% ( 3) 00:08:38.850 27.578 - 27.695: 96.4201% ( 6) 00:08:38.850 27.695 - 27.811: 96.4408% ( 2) 00:08:38.850 27.811 - 27.927: 96.5132% ( 7) 00:08:38.850 27.927 - 28.044: 96.5753% ( 6) 00:08:38.850 28.044 - 28.160: 96.6684% ( 9) 00:08:38.850 28.160 - 28.276: 96.7615% ( 9) 00:08:38.850 28.276 - 28.393: 96.8339% ( 7) 00:08:38.850 28.393 - 28.509: 96.9167% ( 8) 00:08:38.850 28.509 - 28.625: 96.9788% ( 6) 00:08:38.850 28.625 - 28.742: 97.0719% ( 9) 00:08:38.850 28.742 - 28.858: 97.2271% ( 15) 00:08:38.850 28.858 - 28.975: 97.4340% ( 20) 00:08:38.850 28.975 - 29.091: 97.7031% ( 26) 00:08:38.850 29.091 - 29.207: 97.8583% ( 15) 00:08:38.850 29.207 - 29.324: 98.0445% ( 18) 00:08:38.850 29.324 - 29.440: 98.1997% ( 15) 00:08:38.850 29.440 - 29.556: 
98.2928% ( 9) 00:08:38.850 29.556 - 29.673: 98.4480% ( 15) 00:08:38.850 29.673 - 29.789: 98.5929% ( 14) 00:08:38.850 29.789 - 30.022: 98.7688% ( 17) 00:08:38.850 30.022 - 30.255: 98.9136% ( 14) 00:08:38.850 30.255 - 30.487: 98.9757% ( 6) 00:08:38.850 30.487 - 30.720: 99.0378% ( 6) 00:08:38.850 30.720 - 30.953: 99.1205% ( 8) 00:08:38.850 30.953 - 31.185: 99.1412% ( 2) 00:08:38.850 31.185 - 31.418: 99.1826% ( 4) 00:08:38.850 31.418 - 31.651: 99.2137% ( 3) 00:08:38.850 31.651 - 31.884: 99.2447% ( 3) 00:08:38.850 31.884 - 32.116: 99.2654% ( 2) 00:08:38.850 32.116 - 32.349: 99.2964% ( 3) 00:08:38.850 32.349 - 32.582: 99.3068% ( 1) 00:08:38.850 32.582 - 32.815: 99.3171% ( 1) 00:08:38.850 32.815 - 33.047: 99.3378% ( 2) 00:08:38.850 33.280 - 33.513: 99.3689% ( 3) 00:08:38.850 33.513 - 33.745: 99.3792% ( 1) 00:08:38.850 33.745 - 33.978: 99.3895% ( 1) 00:08:38.850 33.978 - 34.211: 99.4206% ( 3) 00:08:38.850 34.211 - 34.444: 99.4309% ( 1) 00:08:38.850 34.676 - 34.909: 99.4516% ( 2) 00:08:38.850 35.142 - 35.375: 99.4930% ( 4) 00:08:38.850 35.375 - 35.607: 99.5654% ( 7) 00:08:38.850 36.073 - 36.305: 99.6068% ( 4) 00:08:38.850 36.305 - 36.538: 99.6172% ( 1) 00:08:38.850 37.004 - 37.236: 99.6379% ( 2) 00:08:38.850 37.236 - 37.469: 99.6586% ( 2) 00:08:38.850 37.469 - 37.702: 99.6689% ( 1) 00:08:38.850 39.098 - 39.331: 99.6896% ( 2) 00:08:38.850 39.331 - 39.564: 99.6999% ( 1) 00:08:38.850 39.564 - 39.796: 99.7206% ( 2) 00:08:38.850 39.796 - 40.029: 99.7310% ( 1) 00:08:38.850 40.262 - 40.495: 99.7413% ( 1) 00:08:38.850 40.495 - 40.727: 99.7517% ( 1) 00:08:38.850 40.727 - 40.960: 99.7620% ( 1) 00:08:38.850 41.891 - 42.124: 99.7827% ( 2) 00:08:38.850 42.589 - 42.822: 99.8034% ( 2) 00:08:38.850 42.822 - 43.055: 99.8138% ( 1) 00:08:38.850 43.520 - 43.753: 99.8345% ( 2) 00:08:38.850 43.985 - 44.218: 99.8448% ( 1) 00:08:38.850 44.218 - 44.451: 99.8551% ( 1) 00:08:38.850 44.451 - 44.684: 99.8758% ( 2) 00:08:38.850 44.684 - 44.916: 99.8862% ( 1) 00:08:38.850 45.149 - 45.382: 99.9172% ( 3) 00:08:38.850 45.615 - 45.847: 99.9276% ( 1) 00:08:38.850 46.313 - 46.545: 99.9379% ( 1) 00:08:38.850 47.244 - 47.476: 99.9483% ( 1) 00:08:38.850 49.338 - 49.571: 99.9586% ( 1) 00:08:38.850 52.364 - 52.596: 99.9690% ( 1) 00:08:38.850 53.760 - 53.993: 99.9793% ( 1) 00:08:38.850 60.044 - 60.509: 99.9897% ( 1) 00:08:38.850 97.280 - 97.745: 100.0000% ( 1) 00:08:38.850 00:08:38.850 Complete histogram 00:08:38.850 ================== 00:08:38.850 Range in us Cumulative Count 00:08:38.850 9.018 - 9.076: 0.0310% ( 3) 00:08:38.850 9.076 - 9.135: 0.1035% ( 7) 00:08:38.850 9.135 - 9.193: 0.2690% ( 16) 00:08:38.850 9.193 - 9.251: 0.5380% ( 26) 00:08:38.850 9.251 - 9.309: 0.7553% ( 21) 00:08:38.850 9.309 - 9.367: 1.1071% ( 34) 00:08:38.850 9.367 - 9.425: 1.5623% ( 44) 00:08:38.850 9.425 - 9.484: 2.2245% ( 64) 00:08:38.850 9.484 - 9.542: 3.0109% ( 76) 00:08:38.850 9.542 - 9.600: 3.8800% ( 84) 00:08:38.850 9.600 - 9.658: 4.7077% ( 80) 00:08:38.850 9.658 - 9.716: 5.5665% ( 83) 00:08:38.850 9.716 - 9.775: 6.5080% ( 91) 00:08:38.850 9.775 - 9.833: 7.2116% ( 68) 00:08:38.850 9.833 - 9.891: 8.1221% ( 88) 00:08:38.850 9.891 - 9.949: 9.7051% ( 153) 00:08:38.850 9.949 - 10.007: 13.3264% ( 350) 00:08:38.850 10.007 - 10.065: 19.4723% ( 594) 00:08:38.850 10.065 - 10.124: 27.1392% ( 741) 00:08:38.850 10.124 - 10.182: 35.2302% ( 782) 00:08:38.850 10.182 - 10.240: 42.4315% ( 696) 00:08:38.850 10.240 - 10.298: 48.5773% ( 594) 00:08:38.850 10.298 - 10.356: 53.0160% ( 429) 00:08:38.850 10.356 - 10.415: 55.3337% ( 224) 00:08:38.850 10.415 - 10.473: 56.5753% ( 120) 
00:08:38.850 10.473 - 10.531: 57.2581% ( 66) 00:08:38.850 10.531 - 10.589: 57.7134% ( 44) 00:08:38.850 10.589 - 10.647: 58.0135% ( 29) 00:08:38.850 10.647 - 10.705: 58.2411% ( 22) 00:08:38.850 10.705 - 10.764: 58.4790% ( 23) 00:08:38.850 10.764 - 10.822: 58.7067% ( 22) 00:08:38.850 10.822 - 10.880: 58.9550% ( 24) 00:08:38.850 10.880 - 10.938: 59.2240% ( 26) 00:08:38.850 10.938 - 10.996: 59.7206% ( 48) 00:08:38.850 10.996 - 11.055: 60.2587% ( 52) 00:08:38.850 11.055 - 11.113: 60.7553% ( 48) 00:08:38.850 11.113 - 11.171: 61.2106% ( 44) 00:08:38.850 11.171 - 11.229: 61.7900% ( 56) 00:08:38.850 11.229 - 11.287: 62.3176% ( 51) 00:08:38.850 11.287 - 11.345: 62.5970% ( 27) 00:08:38.850 11.345 - 11.404: 62.7625% ( 16) 00:08:38.850 11.404 - 11.462: 62.9074% ( 14) 00:08:38.850 11.462 - 11.520: 63.0005% ( 9) 00:08:38.850 11.520 - 11.578: 63.0419% ( 4) 00:08:38.850 11.578 - 11.636: 63.0936% ( 5) 00:08:38.850 11.636 - 11.695: 63.1454% ( 5) 00:08:38.850 11.695 - 11.753: 63.1868% ( 4) 00:08:38.850 11.753 - 11.811: 63.2695% ( 8) 00:08:38.850 11.811 - 11.869: 63.3109% ( 4) 00:08:38.850 11.869 - 11.927: 63.3523% ( 4) 00:08:38.850 11.927 - 11.985: 63.7144% ( 35) 00:08:38.850 11.985 - 12.044: 64.4801% ( 74) 00:08:38.850 12.044 - 12.102: 65.7424% ( 122) 00:08:38.850 12.102 - 12.160: 67.7910% ( 198) 00:08:38.850 12.160 - 12.218: 71.1226% ( 322) 00:08:38.851 12.218 - 12.276: 75.2716% ( 401) 00:08:38.851 12.276 - 12.335: 78.9240% ( 353) 00:08:38.851 12.335 - 12.393: 81.8831% ( 286) 00:08:38.851 12.393 - 12.451: 84.7491% ( 277) 00:08:38.851 12.451 - 12.509: 86.2494% ( 145) 00:08:38.851 12.509 - 12.567: 87.0667% ( 79) 00:08:38.851 12.567 - 12.625: 87.7186% ( 63) 00:08:38.851 12.625 - 12.684: 88.0497% ( 32) 00:08:38.851 12.684 - 12.742: 88.3290% ( 27) 00:08:38.851 12.742 - 12.800: 88.5877% ( 25) 00:08:38.851 12.800 - 12.858: 88.7532% ( 16) 00:08:38.851 12.858 - 12.916: 88.8464% ( 9) 00:08:38.851 12.916 - 12.975: 88.9809% ( 13) 00:08:38.851 12.975 - 13.033: 89.2395% ( 25) 00:08:38.851 13.033 - 13.091: 89.3947% ( 15) 00:08:38.851 13.091 - 13.149: 89.5292% ( 13) 00:08:38.851 13.149 - 13.207: 89.7051% ( 17) 00:08:38.851 13.207 - 13.265: 89.9845% ( 27) 00:08:38.851 13.265 - 13.324: 90.3466% ( 35) 00:08:38.851 13.324 - 13.382: 90.7191% ( 36) 00:08:38.851 13.382 - 13.440: 91.0191% ( 29) 00:08:38.851 13.440 - 13.498: 91.3192% ( 29) 00:08:38.851 13.498 - 13.556: 91.6503% ( 32) 00:08:38.851 13.556 - 13.615: 91.7848% ( 13) 00:08:38.851 13.615 - 13.673: 91.8365% ( 5) 00:08:38.851 13.673 - 13.731: 91.8676% ( 3) 00:08:38.851 13.731 - 13.789: 91.8883% ( 2) 00:08:38.851 13.789 - 13.847: 91.9503% ( 6) 00:08:38.851 13.847 - 13.905: 92.0124% ( 6) 00:08:38.851 13.964 - 14.022: 92.0538% ( 4) 00:08:38.851 14.022 - 14.080: 92.0952% ( 4) 00:08:38.851 14.138 - 14.196: 92.1573% ( 6) 00:08:38.851 14.255 - 14.313: 92.1676% ( 1) 00:08:38.851 14.313 - 14.371: 92.1987% ( 3) 00:08:38.851 14.371 - 14.429: 92.2193% ( 2) 00:08:38.851 14.429 - 14.487: 92.2504% ( 3) 00:08:38.851 14.545 - 14.604: 92.2607% ( 1) 00:08:38.851 14.604 - 14.662: 92.2711% ( 1) 00:08:38.851 14.662 - 14.720: 92.2918% ( 2) 00:08:38.851 14.720 - 14.778: 92.3435% ( 5) 00:08:38.851 14.778 - 14.836: 92.3849% ( 4) 00:08:38.851 14.836 - 14.895: 92.4573% ( 7) 00:08:38.851 14.895 - 15.011: 92.6022% ( 14) 00:08:38.851 15.011 - 15.127: 92.7263% ( 12) 00:08:38.851 15.127 - 15.244: 92.7470% ( 2) 00:08:38.851 15.244 - 15.360: 92.8815% ( 13) 00:08:38.851 15.360 - 15.476: 92.9540% ( 7) 00:08:38.851 15.476 - 15.593: 92.9953% ( 4) 00:08:38.851 15.593 - 15.709: 93.0885% ( 9) 00:08:38.851 15.709 - 
15.825: 93.1298% ( 4) 00:08:38.851 15.825 - 15.942: 93.1919% ( 6) 00:08:38.851 15.942 - 16.058: 93.2437% ( 5) 00:08:38.851 16.058 - 16.175: 93.2747% ( 3) 00:08:38.851 16.175 - 16.291: 93.3057% ( 3) 00:08:38.851 16.291 - 16.407: 93.3575% ( 5) 00:08:38.851 16.407 - 16.524: 93.4092% ( 5) 00:08:38.851 16.524 - 16.640: 93.4920% ( 8) 00:08:38.851 16.640 - 16.756: 93.5023% ( 1) 00:08:38.851 16.756 - 16.873: 93.5541% ( 5) 00:08:38.851 16.873 - 16.989: 93.6368% ( 8) 00:08:38.851 16.989 - 17.105: 93.7093% ( 7) 00:08:38.851 17.105 - 17.222: 93.7713% ( 6) 00:08:38.851 17.222 - 17.338: 93.8438% ( 7) 00:08:38.851 17.338 - 17.455: 93.8955% ( 5) 00:08:38.851 17.455 - 17.571: 93.9679% ( 7) 00:08:38.851 17.571 - 17.687: 93.9990% ( 3) 00:08:38.851 17.687 - 17.804: 94.1024% ( 10) 00:08:38.851 17.804 - 17.920: 94.1645% ( 6) 00:08:38.851 17.920 - 18.036: 94.2369% ( 7) 00:08:38.851 18.036 - 18.153: 94.3301% ( 9) 00:08:38.851 18.153 - 18.269: 94.4025% ( 7) 00:08:38.851 18.269 - 18.385: 94.4853% ( 8) 00:08:38.851 18.385 - 18.502: 94.5887% ( 10) 00:08:38.851 18.502 - 18.618: 94.6508% ( 6) 00:08:38.851 18.618 - 18.735: 94.7439% ( 9) 00:08:38.851 18.735 - 18.851: 94.8474% ( 10) 00:08:38.851 18.851 - 18.967: 94.9095% ( 6) 00:08:38.851 18.967 - 19.084: 94.9509% ( 4) 00:08:38.851 19.084 - 19.200: 95.0336% ( 8) 00:08:38.851 19.200 - 19.316: 95.1681% ( 13) 00:08:38.851 19.316 - 19.433: 95.2095% ( 4) 00:08:38.851 19.433 - 19.549: 95.3026% ( 9) 00:08:38.851 19.549 - 19.665: 95.4165% ( 11) 00:08:38.851 19.665 - 19.782: 95.4578% ( 4) 00:08:38.851 19.782 - 19.898: 95.4889% ( 3) 00:08:38.851 19.898 - 20.015: 95.5510% ( 6) 00:08:38.851 20.015 - 20.131: 95.6027% ( 5) 00:08:38.851 20.131 - 20.247: 95.6441% ( 4) 00:08:38.851 20.247 - 20.364: 95.7372% ( 9) 00:08:38.851 20.364 - 20.480: 95.8096% ( 7) 00:08:38.851 20.480 - 20.596: 95.8820% ( 7) 00:08:38.851 20.596 - 20.713: 95.9959% ( 11) 00:08:38.851 20.713 - 20.829: 96.0579% ( 6) 00:08:38.851 20.829 - 20.945: 96.1097% ( 5) 00:08:38.851 20.945 - 21.062: 96.1821% ( 7) 00:08:38.851 21.062 - 21.178: 96.2545% ( 7) 00:08:38.851 21.178 - 21.295: 96.3373% ( 8) 00:08:38.851 21.295 - 21.411: 96.3890% ( 5) 00:08:38.851 21.411 - 21.527: 96.4718% ( 8) 00:08:38.851 21.527 - 21.644: 96.5546% ( 8) 00:08:38.851 21.644 - 21.760: 96.6063% ( 5) 00:08:38.851 21.760 - 21.876: 96.6270% ( 2) 00:08:38.851 21.876 - 21.993: 96.7098% ( 8) 00:08:38.851 21.993 - 22.109: 96.7512% ( 4) 00:08:38.851 22.109 - 22.225: 96.7926% ( 4) 00:08:38.851 22.225 - 22.342: 96.8339% ( 4) 00:08:38.851 22.342 - 22.458: 96.8960% ( 6) 00:08:38.851 22.458 - 22.575: 96.9374% ( 4) 00:08:38.851 22.575 - 22.691: 97.0202% ( 8) 00:08:38.851 22.691 - 22.807: 97.1340% ( 11) 00:08:38.851 22.807 - 22.924: 97.1754% ( 4) 00:08:38.851 22.924 - 23.040: 97.2168% ( 4) 00:08:38.851 23.040 - 23.156: 97.2478% ( 3) 00:08:38.851 23.156 - 23.273: 97.2788% ( 3) 00:08:38.851 23.273 - 23.389: 97.3306% ( 5) 00:08:38.851 23.389 - 23.505: 97.4030% ( 7) 00:08:38.851 23.505 - 23.622: 97.4547% ( 5) 00:08:38.851 23.622 - 23.738: 97.4858% ( 3) 00:08:38.851 23.738 - 23.855: 97.5892% ( 10) 00:08:38.851 23.855 - 23.971: 97.7651% ( 17) 00:08:38.851 23.971 - 24.087: 97.9100% ( 14) 00:08:38.851 24.087 - 24.204: 98.0859% ( 17) 00:08:38.851 24.204 - 24.320: 98.2307% ( 14) 00:08:38.851 24.320 - 24.436: 98.4584% ( 22) 00:08:38.851 24.436 - 24.553: 98.6756% ( 21) 00:08:38.851 24.553 - 24.669: 98.8515% ( 17) 00:08:38.851 24.669 - 24.785: 98.9653% ( 11) 00:08:38.851 24.785 - 24.902: 99.1412% ( 17) 00:08:38.851 24.902 - 25.018: 99.2137% ( 7) 00:08:38.851 25.018 - 25.135: 99.2757% 
( 6) 00:08:38.851 25.135 - 25.251: 99.3068% ( 3) 00:08:38.851 25.251 - 25.367: 99.3171% ( 1) 00:08:38.851 25.367 - 25.484: 99.3482% ( 3) 00:08:38.851 25.484 - 25.600: 99.4206% ( 7) 00:08:38.851 25.600 - 25.716: 99.4723% ( 5) 00:08:38.851 25.716 - 25.833: 99.5137% ( 4) 00:08:38.851 25.833 - 25.949: 99.5241% ( 1) 00:08:38.851 25.949 - 26.065: 99.5447% ( 2) 00:08:38.851 26.065 - 26.182: 99.5758% ( 3) 00:08:38.851 26.182 - 26.298: 99.6172% ( 4) 00:08:38.851 26.764 - 26.880: 99.6275% ( 1) 00:08:38.851 26.996 - 27.113: 99.6379% ( 1) 00:08:38.851 27.345 - 27.462: 99.6482% ( 1) 00:08:38.851 27.811 - 27.927: 99.6689% ( 2) 00:08:38.851 28.509 - 28.625: 99.6793% ( 1) 00:08:38.851 28.625 - 28.742: 99.6999% ( 2) 00:08:38.851 28.858 - 28.975: 99.7103% ( 1) 00:08:38.851 28.975 - 29.091: 99.7206% ( 1) 00:08:38.851 29.789 - 30.022: 99.7517% ( 3) 00:08:38.851 30.255 - 30.487: 99.7724% ( 2) 00:08:38.851 30.720 - 30.953: 99.7931% ( 2) 00:08:38.851 30.953 - 31.185: 99.8034% ( 1) 00:08:38.851 31.418 - 31.651: 99.8138% ( 1) 00:08:38.851 31.884 - 32.116: 99.8241% ( 1) 00:08:38.851 32.349 - 32.582: 99.8448% ( 2) 00:08:38.851 32.582 - 32.815: 99.8551% ( 1) 00:08:38.851 32.815 - 33.047: 99.8758% ( 2) 00:08:38.851 33.745 - 33.978: 99.8862% ( 1) 00:08:38.851 34.676 - 34.909: 99.8965% ( 1) 00:08:38.851 36.073 - 36.305: 99.9069% ( 1) 00:08:38.851 36.305 - 36.538: 99.9172% ( 1) 00:08:38.851 38.400 - 38.633: 99.9276% ( 1) 00:08:38.851 39.098 - 39.331: 99.9379% ( 1) 00:08:38.851 39.796 - 40.029: 99.9483% ( 1) 00:08:38.851 41.425 - 41.658: 99.9586% ( 1) 00:08:38.851 45.149 - 45.382: 99.9690% ( 1) 00:08:38.851 47.476 - 47.709: 99.9793% ( 1) 00:08:38.851 49.338 - 49.571: 99.9897% ( 1) 00:08:38.851 517.585 - 521.309: 100.0000% ( 1) 00:08:38.851 00:08:38.851 00:08:38.851 real 0m1.271s 00:08:38.851 user 0m1.092s 00:08:38.851 sys 0m0.116s 00:08:38.851 05:56:01 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.851 05:56:01 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:38.851 ************************************ 00:08:38.851 END TEST nvme_overhead 00:08:38.851 ************************************ 00:08:38.851 05:56:01 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:38.851 05:56:01 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:38.851 05:56:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.851 05:56:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.851 ************************************ 00:08:38.851 START TEST nvme_arbitration 00:08:38.851 ************************************ 00:08:38.851 05:56:01 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:42.140 Initializing NVMe Controllers 00:08:42.140 Attached to 0000:00:10.0 00:08:42.140 Attached to 0000:00:11.0 00:08:42.140 Attached to 0000:00:13.0 00:08:42.140 Attached to 0000:00:12.0 00:08:42.140 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:42.140 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:42.140 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:42.140 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:42.140 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:42.140 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:42.140 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:42.140 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 
131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:42.140 Initialization complete. Launching workers. 00:08:42.140 Starting thread on core 1 with urgent priority queue 00:08:42.140 Starting thread on core 2 with urgent priority queue 00:08:42.140 Starting thread on core 3 with urgent priority queue 00:08:42.140 Starting thread on core 0 with urgent priority queue 00:08:42.140 QEMU NVMe Ctrl (12340 ) core 0: 5184.00 IO/s 19.29 secs/100000 ios 00:08:42.140 QEMU NVMe Ctrl (12342 ) core 0: 5184.00 IO/s 19.29 secs/100000 ios 00:08:42.140 QEMU NVMe Ctrl (12341 ) core 1: 5056.00 IO/s 19.78 secs/100000 ios 00:08:42.140 QEMU NVMe Ctrl (12342 ) core 1: 5056.00 IO/s 19.78 secs/100000 ios 00:08:42.140 QEMU NVMe Ctrl (12343 ) core 2: 5184.00 IO/s 19.29 secs/100000 ios 00:08:42.140 QEMU NVMe Ctrl (12342 ) core 3: 5333.33 IO/s 18.75 secs/100000 ios 00:08:42.140 ======================================================== 00:08:42.140 00:08:42.140 00:08:42.140 real 0m3.292s 00:08:42.140 user 0m9.039s 00:08:42.140 sys 0m0.133s 00:08:42.140 05:56:05 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.140 05:56:05 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:42.140 ************************************ 00:08:42.140 END TEST nvme_arbitration 00:08:42.140 ************************************ 00:08:42.140 05:56:05 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:42.140 05:56:05 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:42.140 05:56:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.140 05:56:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.140 ************************************ 00:08:42.140 START TEST nvme_single_aen 00:08:42.140 ************************************ 00:08:42.140 05:56:05 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:42.398 Asynchronous Event Request test 00:08:42.398 Attached to 0000:00:10.0 00:08:42.398 Attached to 0000:00:11.0 00:08:42.398 Attached to 0000:00:13.0 00:08:42.398 Attached to 0000:00:12.0 00:08:42.398 Reset controller to setup AER completions for this process 00:08:42.398 Registering asynchronous event callbacks... 
00:08:42.398 Getting orig temperature thresholds of all controllers 00:08:42.398 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.398 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.398 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.398 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.398 Setting all controllers temperature threshold low to trigger AER 00:08:42.398 Waiting for all controllers temperature threshold to be set lower 00:08:42.398 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.398 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:42.398 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.398 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:42.398 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.398 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:42.398 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.398 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:42.398 Waiting for all controllers to trigger AER and reset threshold 00:08:42.398 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.398 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.398 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.398 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.398 Cleaning up... 00:08:42.398 00:08:42.398 real 0m0.282s 00:08:42.398 user 0m0.088s 00:08:42.398 sys 0m0.132s 00:08:42.398 05:56:05 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.398 05:56:05 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:42.398 ************************************ 00:08:42.398 END TEST nvme_single_aen 00:08:42.398 ************************************ 00:08:42.398 05:56:05 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:42.398 05:56:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.398 05:56:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.398 05:56:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.398 ************************************ 00:08:42.398 START TEST nvme_doorbell_aers 00:08:42.398 ************************************ 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:42.398 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
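The xtrace above shows how nvme_doorbell_aers builds its device list: gen_nvme.sh emits an SPDK JSON config and jq pulls each controller's PCI address (traddr) out of it, after which the test loops over the resulting BDFs. A minimal bash sketch of that enumerate-and-loop pattern, condensed from the trace above and below (paths and flags are as logged; the error handling is simplified):

    rootdir=/home/vagrant/spdk_repo/spdk

    # Enumerate NVMe PCI addresses (BDFs) from the generated SPDK config,
    # as the traced get_nvme_bdfs helper does.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

    # Run the doorbell AER test once per controller; --preserve-status makes
    # timeout(1) report the test binary's own exit status when the 10 s
    # limit expires, rather than its generic 124.
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done
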
00:08:42.656 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:42.656 05:56:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:42.656 05:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:42.656 05:56:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:42.914 [2024-12-08 05:56:05.719453] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:08:52.922 Executing: test_write_invalid_db 00:08:52.922 Waiting for AER completion... 00:08:52.922 Failure: test_write_invalid_db 00:08:52.922 00:08:52.922 Executing: test_invalid_db_write_overflow_sq 00:08:52.922 Waiting for AER completion... 00:08:52.922 Failure: test_invalid_db_write_overflow_sq 00:08:52.922 00:08:52.922 Executing: test_invalid_db_write_overflow_cq 00:08:52.922 Waiting for AER completion... 00:08:52.922 Failure: test_invalid_db_write_overflow_cq 00:08:52.922 00:08:52.922 05:56:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:52.922 05:56:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:52.922 [2024-12-08 05:56:15.750726] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:02.901 Executing: test_write_invalid_db 00:09:02.901 Waiting for AER completion... 00:09:02.902 Failure: test_write_invalid_db 00:09:02.902 00:09:02.902 Executing: test_invalid_db_write_overflow_sq 00:09:02.902 Waiting for AER completion... 00:09:02.902 Failure: test_invalid_db_write_overflow_sq 00:09:02.902 00:09:02.902 Executing: test_invalid_db_write_overflow_cq 00:09:02.902 Waiting for AER completion... 00:09:02.902 Failure: test_invalid_db_write_overflow_cq 00:09:02.902 00:09:02.902 05:56:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:02.902 05:56:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:02.902 [2024-12-08 05:56:25.791346] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:12.873 Executing: test_write_invalid_db 00:09:12.873 Waiting for AER completion... 00:09:12.873 Failure: test_write_invalid_db 00:09:12.873 00:09:12.873 Executing: test_invalid_db_write_overflow_sq 00:09:12.873 Waiting for AER completion... 00:09:12.873 Failure: test_invalid_db_write_overflow_sq 00:09:12.873 00:09:12.873 Executing: test_invalid_db_write_overflow_cq 00:09:12.873 Waiting for AER completion... 
00:09:12.873 Failure: test_invalid_db_write_overflow_cq 00:09:12.873 00:09:12.873 05:56:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:12.873 05:56:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.873 [2024-12-08 05:56:35.834768] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:22.884 Executing: test_write_invalid_db 00:09:22.884 Waiting for AER completion... 00:09:22.884 Failure: test_write_invalid_db 00:09:22.884 00:09:22.884 Executing: test_invalid_db_write_overflow_sq 00:09:22.884 Waiting for AER completion... 00:09:22.884 Failure: test_invalid_db_write_overflow_sq 00:09:22.884 00:09:22.884 Executing: test_invalid_db_write_overflow_cq 00:09:22.884 Waiting for AER completion... 00:09:22.884 Failure: test_invalid_db_write_overflow_cq 00:09:22.884 00:09:22.884 00:09:22.884 real 0m40.247s 00:09:22.884 user 0m34.265s 00:09:22.884 sys 0m5.657s 00:09:22.884 05:56:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.884 05:56:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:22.884 ************************************ 00:09:22.884 END TEST nvme_doorbell_aers 00:09:22.884 ************************************ 00:09:22.884 05:56:45 nvme -- nvme/nvme.sh@97 -- # uname 00:09:22.884 05:56:45 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:22.884 05:56:45 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:22.884 05:56:45 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:22.884 05:56:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.884 05:56:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.884 ************************************ 00:09:22.884 START TEST nvme_multi_aen 00:09:22.884 ************************************ 00:09:22.884 05:56:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:22.884 [2024-12-08 05:56:45.927334] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:22.884 [2024-12-08 05:56:45.927525] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:22.884 [2024-12-08 05:56:45.927563] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.929157] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.929232] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.929251] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.930646] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. 
Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.930708] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.930725] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.932004] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.932077] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 [2024-12-08 05:56:45.932094] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76117) is not found. Dropping the request. 00:09:23.142 Child process pid: 76632 00:09:23.142 [Child] Asynchronous Event Request test 00:09:23.142 [Child] Attached to 0000:00:10.0 00:09:23.142 [Child] Attached to 0000:00:11.0 00:09:23.142 [Child] Attached to 0000:00:13.0 00:09:23.142 [Child] Attached to 0000:00:12.0 00:09:23.142 [Child] Registering asynchronous event callbacks... 00:09:23.142 [Child] Getting orig temperature thresholds of all controllers 00:09:23.142 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.142 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.142 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.142 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.142 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:23.142 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.142 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.142 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.142 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.142 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.142 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.142 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.142 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.142 [Child] Cleaning up... 00:09:23.401 Asynchronous Event Request test 00:09:23.401 Attached to 0000:00:10.0 00:09:23.401 Attached to 0000:00:11.0 00:09:23.401 Attached to 0000:00:13.0 00:09:23.401 Attached to 0000:00:12.0 00:09:23.401 Reset controller to setup AER completions for this process 00:09:23.401 Registering asynchronous event callbacks... 
00:09:23.401 Getting orig temperature thresholds of all controllers 00:09:23.401 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.401 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.401 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.401 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.401 Setting all controllers temperature threshold low to trigger AER 00:09:23.401 Waiting for all controllers temperature threshold to be set lower 00:09:23.401 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.401 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:23.401 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.401 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:23.401 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.401 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:23.401 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.401 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:23.401 Waiting for all controllers to trigger AER and reset threshold 00:09:23.401 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.401 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.401 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.401 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.401 Cleaning up... 00:09:23.401 00:09:23.401 real 0m0.510s 00:09:23.401 user 0m0.165s 00:09:23.401 sys 0m0.218s 00:09:23.401 05:56:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.401 05:56:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:23.401 ************************************ 00:09:23.401 END TEST nvme_multi_aen 00:09:23.401 ************************************ 00:09:23.401 05:56:46 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:23.401 05:56:46 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:23.401 05:56:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.401 05:56:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.401 ************************************ 00:09:23.401 START TEST nvme_startup 00:09:23.401 ************************************ 00:09:23.401 05:56:46 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:23.660 Initializing NVMe Controllers 00:09:23.660 Attached to 0000:00:10.0 00:09:23.660 Attached to 0000:00:11.0 00:09:23.660 Attached to 0000:00:13.0 00:09:23.660 Attached to 0000:00:12.0 00:09:23.660 Initialization complete. 00:09:23.660 Time used:161289.688 (us). 
00:09:23.660 00:09:23.660 real 0m0.233s 00:09:23.660 user 0m0.075s 00:09:23.660 sys 0m0.110s 00:09:23.660 05:56:46 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.660 05:56:46 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:23.660 ************************************ 00:09:23.660 END TEST nvme_startup 00:09:23.660 ************************************ 00:09:23.660 05:56:46 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:23.660 05:56:46 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.660 05:56:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.660 05:56:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.660 ************************************ 00:09:23.660 START TEST nvme_multi_secondary 00:09:23.660 ************************************ 00:09:23.660 05:56:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:23.660 05:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76688 00:09:23.660 05:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:23.660 05:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76689 00:09:23.660 05:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:23.660 05:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:26.943 Initializing NVMe Controllers 00:09:26.943 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.943 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.943 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:26.943 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:26.943 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:26.943 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:26.943 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:26.943 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:26.943 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:26.943 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:26.943 Initialization complete. Launching workers. 
00:09:26.943 ======================================================== 00:09:26.943 Latency(us) 00:09:26.943 Device Information : IOPS MiB/s Average min max 00:09:26.943 PCIE (0000:00:10.0) NSID 1 from core 1: 5987.19 23.39 2670.56 1069.79 6285.33 00:09:26.943 PCIE (0000:00:11.0) NSID 1 from core 1: 5987.19 23.39 2671.85 1088.24 6488.09 00:09:26.943 PCIE (0000:00:13.0) NSID 1 from core 1: 5987.19 23.39 2671.81 1087.79 5773.23 00:09:26.943 PCIE (0000:00:12.0) NSID 1 from core 1: 5987.19 23.39 2671.93 1088.31 5490.23 00:09:26.943 PCIE (0000:00:12.0) NSID 2 from core 1: 5987.19 23.39 2672.23 1071.43 5286.19 00:09:26.943 PCIE (0000:00:12.0) NSID 3 from core 1: 5987.19 23.39 2672.18 1045.32 6082.92 00:09:26.943 ======================================================== 00:09:26.943 Total : 35923.12 140.32 2671.76 1045.32 6488.09 00:09:26.943 00:09:26.943 Initializing NVMe Controllers 00:09:26.943 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.943 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.943 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:26.943 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:26.943 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:26.943 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:26.943 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:26.943 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:26.943 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:26.943 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:26.943 Initialization complete. Launching workers. 00:09:26.943 ======================================================== 00:09:26.943 Latency(us) 00:09:26.943 Device Information : IOPS MiB/s Average min max 00:09:26.943 PCIE (0000:00:10.0) NSID 1 from core 2: 2619.58 10.23 6099.77 1522.07 12392.05 00:09:26.943 PCIE (0000:00:11.0) NSID 1 from core 2: 2619.58 10.23 6098.64 1519.90 12661.51 00:09:26.943 PCIE (0000:00:13.0) NSID 1 from core 2: 2619.58 10.23 6098.51 1517.85 12340.02 00:09:26.943 PCIE (0000:00:12.0) NSID 1 from core 2: 2619.58 10.23 6098.25 1487.03 12444.41 00:09:26.943 PCIE (0000:00:12.0) NSID 2 from core 2: 2619.58 10.23 6097.74 1223.87 13009.48 00:09:26.943 PCIE (0000:00:12.0) NSID 3 from core 2: 2619.58 10.23 6097.40 1095.60 13244.93 00:09:26.943 ======================================================== 00:09:26.943 Total : 15717.48 61.40 6098.39 1095.60 13244.93 00:09:26.943 00:09:27.201 05:56:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76688 00:09:29.101 Initializing NVMe Controllers 00:09:29.101 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.101 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.101 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.101 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.101 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:29.101 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:29.101 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:29.101 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:29.101 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:29.101 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:29.101 Initialization complete. Launching workers. 
00:09:29.101 ======================================================== 00:09:29.101 Latency(us) 00:09:29.101 Device Information : IOPS MiB/s Average min max 00:09:29.101 PCIE (0000:00:10.0) NSID 1 from core 0: 8265.58 32.29 1934.06 980.53 9355.57 00:09:29.101 PCIE (0000:00:11.0) NSID 1 from core 0: 8265.58 32.29 1935.10 1005.17 9166.58 00:09:29.101 PCIE (0000:00:13.0) NSID 1 from core 0: 8265.58 32.29 1935.01 912.61 9191.95 00:09:29.101 PCIE (0000:00:12.0) NSID 1 from core 0: 8265.58 32.29 1934.91 770.15 9203.00 00:09:29.101 PCIE (0000:00:12.0) NSID 2 from core 0: 8265.58 32.29 1934.80 679.92 9210.23 00:09:29.101 PCIE (0000:00:12.0) NSID 3 from core 0: 8265.58 32.29 1934.69 556.32 9401.75 00:09:29.101 ======================================================== 00:09:29.101 Total : 49593.48 193.72 1934.76 556.32 9401.75 00:09:29.101 00:09:29.101 05:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76689 00:09:29.101 05:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76764 00:09:29.101 05:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:29.101 05:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76765 00:09:29.101 05:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:29.101 05:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:32.408 Initializing NVMe Controllers 00:09:32.408 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.408 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:32.408 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:32.408 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:32.408 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:32.408 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:32.408 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:32.408 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:32.408 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:32.408 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:32.408 Initialization complete. Launching workers. 
00:09:32.408 ======================================================== 00:09:32.409 Latency(us) 00:09:32.409 Device Information : IOPS MiB/s Average min max 00:09:32.409 PCIE (0000:00:10.0) NSID 1 from core 0: 5662.72 22.12 2823.64 986.88 6863.15 00:09:32.409 PCIE (0000:00:11.0) NSID 1 from core 0: 5662.72 22.12 2824.95 1010.75 7167.61 00:09:32.409 PCIE (0000:00:13.0) NSID 1 from core 0: 5662.72 22.12 2824.84 1025.76 7150.86 00:09:32.409 PCIE (0000:00:12.0) NSID 1 from core 0: 5668.05 22.14 2822.22 1042.77 6953.46 00:09:32.409 PCIE (0000:00:12.0) NSID 2 from core 0: 5668.05 22.14 2822.28 1033.33 7343.86 00:09:32.409 PCIE (0000:00:12.0) NSID 3 from core 0: 5668.05 22.14 2822.24 1025.47 6911.71 00:09:32.409 ======================================================== 00:09:32.409 Total : 33992.33 132.78 2823.36 986.88 7343.86 00:09:32.409 00:09:32.667 Initializing NVMe Controllers 00:09:32.667 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.667 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:32.667 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:32.667 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:32.667 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:32.667 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:32.667 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:32.667 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:32.667 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:32.667 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:32.667 Initialization complete. Launching workers. 00:09:32.667 ======================================================== 00:09:32.667 Latency(us) 00:09:32.667 Device Information : IOPS MiB/s Average min max 00:09:32.667 PCIE (0000:00:10.0) NSID 1 from core 1: 5641.25 22.04 2834.32 1010.40 5114.84 00:09:32.667 PCIE (0000:00:11.0) NSID 1 from core 1: 5641.25 22.04 2835.41 1034.18 5741.12 00:09:32.667 PCIE (0000:00:13.0) NSID 1 from core 1: 5641.25 22.04 2835.17 841.47 5464.60 00:09:32.667 PCIE (0000:00:12.0) NSID 1 from core 1: 5641.25 22.04 2834.93 728.81 5464.74 00:09:32.667 PCIE (0000:00:12.0) NSID 2 from core 1: 5641.25 22.04 2834.66 601.59 5353.19 00:09:32.667 PCIE (0000:00:12.0) NSID 3 from core 1: 5641.25 22.04 2834.38 499.36 5503.08 00:09:32.667 ======================================================== 00:09:32.667 Total : 33847.50 132.22 2834.81 499.36 5741.12 00:09:32.667 00:09:34.571 Initializing NVMe Controllers 00:09:34.571 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:34.571 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:34.571 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:34.571 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:34.571 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:34.571 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:34.571 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:34.571 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:34.571 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:34.571 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:34.571 Initialization complete. Launching workers. 
00:09:34.571 ======================================================== 00:09:34.571 Latency(us) 00:09:34.571 Device Information : IOPS MiB/s Average min max 00:09:34.571 PCIE (0000:00:10.0) NSID 1 from core 2: 3794.55 14.82 4214.27 988.41 14088.40 00:09:34.571 PCIE (0000:00:11.0) NSID 1 from core 2: 3794.55 14.82 4215.57 1010.11 13869.64 00:09:34.571 PCIE (0000:00:13.0) NSID 1 from core 2: 3794.55 14.82 4215.93 1012.01 13053.68 00:09:34.571 PCIE (0000:00:12.0) NSID 1 from core 2: 3794.55 14.82 4215.43 871.56 13819.94 00:09:34.571 PCIE (0000:00:12.0) NSID 2 from core 2: 3794.55 14.82 4215.30 730.45 16376.16 00:09:34.571 PCIE (0000:00:12.0) NSID 3 from core 2: 3794.55 14.82 4215.15 557.07 15774.33 00:09:34.571 ======================================================== 00:09:34.571 Total : 22767.33 88.93 4215.28 557.07 16376.16 00:09:34.571 00:09:34.571 05:56:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76764 00:09:34.571 05:56:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76765 00:09:34.571 00:09:34.571 real 0m10.987s 00:09:34.571 user 0m18.404s 00:09:34.571 sys 0m0.823s 00:09:34.571 05:56:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.571 05:56:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:34.571 ************************************ 00:09:34.571 END TEST nvme_multi_secondary 00:09:34.571 ************************************ 00:09:34.571 05:56:57 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:34.571 05:56:57 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:34.571 05:56:57 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75714 ]] 00:09:34.571 05:56:57 nvme -- common/autotest_common.sh@1090 -- # kill 75714 00:09:34.571 05:56:57 nvme -- common/autotest_common.sh@1091 -- # wait 75714 00:09:34.571 [2024-12-08 05:56:57.584791] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.584914] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.584969] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.585030] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.586033] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.586125] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.586173] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.586272] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.587258] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 
00:09:34.571 [2024-12-08 05:56:57.587347] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.587394] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.587463] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.588352] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.588461] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.588510] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.571 [2024-12-08 05:56:57.588554] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76631) is not found. Dropping the request. 00:09:34.831 05:56:57 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:34.831 05:56:57 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:34.831 05:56:57 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:34.831 05:56:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.831 05:56:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.831 05:56:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:34.831 ************************************ 00:09:34.831 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:34.831 ************************************ 00:09:34.831 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:34.831 * Looking for test storage... 
00:09:34.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.831 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.831 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.831 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.091 --rc genhtml_branch_coverage=1 00:09:35.091 --rc genhtml_function_coverage=1 00:09:35.091 --rc genhtml_legend=1 00:09:35.091 --rc geninfo_all_blocks=1 00:09:35.091 --rc geninfo_unexecuted_blocks=1 00:09:35.091 00:09:35.091 ' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.091 --rc genhtml_branch_coverage=1 00:09:35.091 --rc genhtml_function_coverage=1 00:09:35.091 --rc genhtml_legend=1 00:09:35.091 --rc geninfo_all_blocks=1 00:09:35.091 --rc geninfo_unexecuted_blocks=1 00:09:35.091 00:09:35.091 ' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.091 --rc genhtml_branch_coverage=1 00:09:35.091 --rc genhtml_function_coverage=1 00:09:35.091 --rc genhtml_legend=1 00:09:35.091 --rc geninfo_all_blocks=1 00:09:35.091 --rc geninfo_unexecuted_blocks=1 00:09:35.091 00:09:35.091 ' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.091 --rc genhtml_branch_coverage=1 00:09:35.091 --rc genhtml_function_coverage=1 00:09:35.091 --rc genhtml_legend=1 00:09:35.091 --rc geninfo_all_blocks=1 00:09:35.091 --rc geninfo_unexecuted_blocks=1 00:09:35.091 00:09:35.091 ' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:35.091 
05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:35.091 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76925 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76925 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76925 ']' 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
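The get_first_nvme_bdf expansion above is a thin wrapper: gen_nvme.sh emits the bdev_nvme attach configuration as JSON, jq pulls every `traddr`, and the first PCI address wins (0000:00:10.0 here, out of the four emulated controllers). A condensed sketch under those assumptions:

```bash
# Condensed sketch of the BDF discovery traced above; assumes gen_nvme.sh
# prints the bdev_nvme attach config as JSON, as it does in this run.
rootdir=/home/vagrant/spdk_repo/spdk

get_first_nvme_bdf() {
    local -a bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1     # no NVMe controllers found
    printf '%s\n' "${bdfs[0]}"             # e.g. 0000:00:10.0
}

bdf=$(get_first_nvme_bdf) || exit 1
```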
00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.092 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:35.092 [2024-12-08 05:56:58.057250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:35.092 [2024-12-08 05:56:58.057390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76925 ] 00:09:35.350 [2024-12-08 05:56:58.216767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.350 [2024-12-08 05:56:58.256640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.350 [2024-12-08 05:56:58.256791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.350 [2024-12-08 05:56:58.256872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.350 [2024-12-08 05:56:58.256988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:35.609 nvme0n1 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_4zVvh.txt 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:35.609 true 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733637418 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76941 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:35.609 05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:35.609 
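The setup just traced is the heart of the test: an error injection is armed on the controller's next admin Get Features (opcode 10) with a 15 s hold (`--do_not_submit`), then bdev_nvme_send_cmd fires such a command in the background so it parks "stuck" until the reset below completes it manually. A condensed replay of those two RPCs, using the exact flag values visible in the trace:

```bash
# Condensed replay of the two RPCs above; all flag values are the ones from
# the trace. $cmd_b64 stands in for the base64 admin command payload seen
# there (opcode 0x0a Get Features, cdw10=7 Number of Queues).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Arm a one-shot injected error (sct=0, sc=1) on the next admin opcode 10,
# holding the command unsubmitted for up to 15 s.
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

# Fire the admin command asynchronously; it stays stuck until the reset.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
get_feat_pid=$!
```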
05:56:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:37.511 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:37.511 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.511 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:37.511 [2024-12-08 05:57:00.532319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:37.511 [2024-12-08 05:57:00.532854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:37.511 [2024-12-08 05:57:00.533037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:37.511 [2024-12-08 05:57:00.533307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:37.511 [2024-12-08 05:57:00.535416] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:37.511 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76941 00:09:37.511 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.511 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76941 00:09:37.511 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76941 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_4zVvh.txt 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_4zVvh.txt 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76925 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76925 ']' 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76925 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76925 00:09:37.771 killing process with pid 76925 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76925' 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76925 00:09:37.771 05:57:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76925 00:09:38.030 05:57:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:38.030 05:57:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:38.030 00:09:38.030 real 0m3.319s 00:09:38.030 user 0m11.437s 00:09:38.030 sys 0m0.523s 00:09:38.030 05:57:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:38.030 ************************************ 00:09:38.030 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:38.030 ************************************ 00:09:38.030 05:57:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:38.030 05:57:01 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:38.030 05:57:01 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:38.030 05:57:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.030 05:57:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.030 05:57:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.288 ************************************ 00:09:38.288 START TEST nvme_fio 00:09:38.288 ************************************ 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:38.288 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:38.288 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:38.547 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:38.547 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:38.806 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:38.806 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:38.806 05:57:01 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:38.806 05:57:01 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.064 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:39.064 fio-3.35 00:09:39.064 Starting 1 thread 00:09:42.347 00:09:42.347 test: (groupid=0, jobs=1): err= 0: pid=77065: Sun Dec 8 05:57:05 2024 00:09:42.347 read: IOPS=17.1k, BW=66.6MiB/s (69.9MB/s)(133MiB/2001msec) 00:09:42.347 slat (usec): min=4, max=614, avg= 5.67, stdev= 3.82 00:09:42.347 clat (usec): min=408, max=9899, avg=3730.29, stdev=469.76 00:09:42.347 lat (usec): min=413, max=9983, avg=3735.96, stdev=470.39 00:09:42.347 clat percentiles (usec): 00:09:42.347 | 1.00th=[ 2638], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:09:42.347 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3720], 00:09:42.347 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 4047], 95.00th=[ 4555], 00:09:42.347 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7439], 99.95th=[ 8225], 00:09:42.347 | 99.99th=[ 9634] 00:09:42.347 bw ( KiB/s): min=61085, max=72056, per=99.67%, avg=68017.67, stdev=6031.05, samples=3 00:09:42.347 iops : min=15271, max=18014, avg=17004.33, stdev=1507.91, samples=3 00:09:42.347 write: IOPS=17.1k, BW=66.8MiB/s (70.0MB/s)(134MiB/2001msec); 0 zone resets 00:09:42.347 slat (nsec): min=4549, max=92941, avg=5799.12, stdev=1906.80 00:09:42.347 clat (usec): min=245, max=9707, avg=3740.78, stdev=485.27 00:09:42.347 lat (usec): min=250, max=9718, avg=3746.58, stdev=485.86 00:09:42.347 clat percentiles (usec): 00:09:42.347 | 1.00th=[ 2573], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:09:42.347 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3720], 00:09:42.347 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 4080], 95.00th=[ 4555], 00:09:42.347 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 7635], 99.95th=[ 8291], 00:09:42.347 | 99.99th=[ 9503] 00:09:42.347 bw ( KiB/s): min=61453, max=71632, per=99.23%, avg=67852.33, stdev=5572.25, samples=3 00:09:42.347 iops : min=15363, max=17908, avg=16963.00, stdev=1393.21, samples=3 00:09:42.347 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:42.347 lat (msec) : 2=0.35%, 4=88.47%, 10=11.14% 00:09:42.347 cpu : usr=98.75%, sys=0.30%, ctx=4, majf=0, minf=626 
00:09:42.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:42.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.347 issued rwts: total=34137,34205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.347 00:09:42.347 Run status group 0 (all jobs): 00:09:42.347 READ: bw=66.6MiB/s (69.9MB/s), 66.6MiB/s-66.6MiB/s (69.9MB/s-69.9MB/s), io=133MiB (140MB), run=2001-2001msec 00:09:42.347 WRITE: bw=66.8MiB/s (70.0MB/s), 66.8MiB/s-66.8MiB/s (70.0MB/s-70.0MB/s), io=134MiB (140MB), run=2001-2001msec 00:09:42.347 ----------------------------------------------------- 00:09:42.347 Suppressions used: 00:09:42.347 count bytes template 00:09:42.347 1 32 /usr/src/fio/parse.c 00:09:42.347 1 8 libtcmalloc_minimal.so 00:09:42.347 ----------------------------------------------------- 00:09:42.347 00:09:42.347 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:42.347 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:42.347 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:42.347 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:42.605 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:42.605 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:42.864 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:42.864 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:42.864 05:57:05 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:42.864 05:57:05 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:43.121 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:43.121 fio-3.35 00:09:43.121 Starting 1 thread 00:09:46.423 00:09:46.424 test: (groupid=0, jobs=1): err= 0: pid=77132: Sun Dec 8 05:57:09 2024 00:09:46.424 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec) 00:09:46.424 slat (nsec): min=4269, max=58785, avg=6103.25, stdev=2276.08 00:09:46.424 clat (usec): min=267, max=8514, avg=4031.30, stdev=512.21 00:09:46.424 lat (usec): min=273, max=8573, avg=4037.40, stdev=512.91 00:09:46.424 clat percentiles (usec): 00:09:46.424 | 1.00th=[ 3163], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3654], 00:09:46.424 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 4015], 00:09:46.424 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5014], 00:09:46.424 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 7439], 99.95th=[ 7832], 00:09:46.424 | 99.99th=[ 8356] 00:09:46.424 bw ( KiB/s): min=60720, max=66880, per=100.00%, avg=63200.00, stdev=3250.60, samples=3 00:09:46.424 iops : min=15180, max=16720, avg=15800.00, stdev=812.65, samples=3 00:09:46.424 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(124MiB/2001msec); 0 zone resets 00:09:46.424 slat (nsec): min=4423, max=49907, avg=6324.06, stdev=2316.15 00:09:46.424 clat (usec): min=296, max=8396, avg=4044.66, stdev=514.61 00:09:46.424 lat (usec): min=302, max=8408, avg=4050.98, stdev=515.30 00:09:46.424 clat percentiles (usec): 00:09:46.424 | 1.00th=[ 3195], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3654], 00:09:46.424 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4047], 00:09:46.424 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5014], 00:09:46.424 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 7439], 99.95th=[ 7767], 00:09:46.424 | 99.99th=[ 8160] 00:09:46.424 bw ( KiB/s): min=61008, max=66296, per=99.48%, avg=62906.67, stdev=2942.33, samples=3 00:09:46.424 iops : min=15252, max=16574, avg=15726.67, stdev=735.58, samples=3 00:09:46.424 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:46.424 lat (msec) : 2=0.05%, 4=58.17%, 10=41.74% 00:09:46.424 cpu : usr=99.00%, sys=0.00%, ctx=4, majf=0, minf=626 00:09:46.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:46.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.424 issued rwts: total=31597,31634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.424 00:09:46.424 Run status group 0 (all jobs): 00:09:46.424 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:09:46.424 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=124MiB (130MB), run=2001-2001msec 00:09:46.424 ----------------------------------------------------- 00:09:46.424 Suppressions used: 00:09:46.424 count bytes template 00:09:46.424 1 32 /usr/src/fio/parse.c 00:09:46.424 1 8 libtcmalloc_minimal.so 00:09:46.424 ----------------------------------------------------- 00:09:46.424 00:09:46.424 05:57:09 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:46.424 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:46.424 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:46.424 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:46.683 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:46.683 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:46.942 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:46.942 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:46.942 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:46.943 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:46.943 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:46.943 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:46.943 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:46.943 05:57:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:47.201 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:47.201 fio-3.35 00:09:47.201 Starting 1 thread 00:09:50.484 00:09:50.484 test: (groupid=0, jobs=1): err= 0: pid=77188: Sun Dec 8 05:57:13 2024 00:09:50.484 read: IOPS=16.5k, BW=64.6MiB/s (67.7MB/s)(129MiB/2001msec) 00:09:50.484 slat (nsec): min=4099, max=91590, avg=5874.57, stdev=2160.88 00:09:50.484 clat (usec): min=257, max=10856, avg=3850.41, stdev=563.44 00:09:50.484 lat (usec): min=262, max=10935, avg=3856.28, stdev=564.18 00:09:50.484 clat percentiles (usec): 00:09:50.484 | 1.00th=[ 2933], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:09:50.484 | 30.00th=[ 3621], 40.00th=[ 
3687], 50.00th=[ 3720], 60.00th=[ 3818], 00:09:50.484 | 70.00th=[ 3884], 80.00th=[ 4047], 90.00th=[ 4359], 95.00th=[ 4621], 00:09:50.484 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[ 9634], 00:09:50.484 | 99.99th=[10814] 00:09:50.484 bw ( KiB/s): min=63456, max=68632, per=99.24%, avg=65608.00, stdev=2695.93, samples=3 00:09:50.484 iops : min=15864, max=17158, avg=16402.00, stdev=673.98, samples=3 00:09:50.484 write: IOPS=16.6k, BW=64.7MiB/s (67.8MB/s)(129MiB/2001msec); 0 zone resets 00:09:50.484 slat (nsec): min=4173, max=71789, avg=6020.33, stdev=2159.52 00:09:50.484 clat (usec): min=232, max=10781, avg=3863.53, stdev=563.22 00:09:50.484 lat (usec): min=238, max=10792, avg=3869.55, stdev=563.97 00:09:50.484 clat percentiles (usec): 00:09:50.484 | 1.00th=[ 2999], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:09:50.484 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:09:50.484 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 4359], 95.00th=[ 4686], 00:09:50.484 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 8848], 99.95th=[ 9765], 00:09:50.484 | 99.99th=[10683] 00:09:50.484 bw ( KiB/s): min=63752, max=68376, per=98.80%, avg=65432.00, stdev=2558.05, samples=3 00:09:50.484 iops : min=15938, max=17094, avg=16358.00, stdev=639.51, samples=3 00:09:50.484 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:50.484 lat (msec) : 2=0.18%, 4=76.14%, 10=23.61%, 20=0.04% 00:09:50.484 cpu : usr=98.60%, sys=0.35%, ctx=432, majf=0, minf=627 00:09:50.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:50.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.484 issued rwts: total=33071,33129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.484 00:09:50.484 Run status group 0 (all jobs): 00:09:50.484 READ: bw=64.6MiB/s (67.7MB/s), 64.6MiB/s-64.6MiB/s (67.7MB/s-67.7MB/s), io=129MiB (135MB), run=2001-2001msec 00:09:50.484 WRITE: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=129MiB (136MB), run=2001-2001msec 00:09:50.484 ----------------------------------------------------- 00:09:50.484 Suppressions used: 00:09:50.484 count bytes template 00:09:50.484 1 32 /usr/src/fio/parse.c 00:09:50.484 1 8 libtcmalloc_minimal.so 00:09:50.484 ----------------------------------------------------- 00:09:50.484 00:09:50.742 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:50.742 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:50.742 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:50.742 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:50.742 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:50.742 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:51.000 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:51.000 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:51.000 05:57:13 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:51.000 05:57:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:51.000 05:57:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:51.000 05:57:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:51.000 05:57:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:51.258 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:51.258 fio-3.35 00:09:51.258 Starting 1 thread 00:09:54.556 00:09:54.556 test: (groupid=0, jobs=1): err= 0: pid=77254: Sun Dec 8 05:57:17 2024 00:09:54.556 read: IOPS=17.0k, BW=66.3MiB/s (69.6MB/s)(133MiB/2001msec) 00:09:54.556 slat (nsec): min=4377, max=60020, avg=5763.26, stdev=1815.55 00:09:54.556 clat (usec): min=298, max=11686, avg=3744.88, stdev=448.52 00:09:54.556 lat (usec): min=303, max=11746, avg=3750.64, stdev=449.14 00:09:54.556 clat percentiles (usec): 00:09:54.556 | 1.00th=[ 3228], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3490], 00:09:54.556 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3687], 00:09:54.556 | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4490], 00:09:54.556 | 99.00th=[ 4817], 99.50th=[ 6325], 99.90th=[ 7570], 99.95th=[10028], 00:09:54.556 | 99.99th=[11469] 00:09:54.556 bw ( KiB/s): min=65656, max=71176, per=99.72%, avg=67754.67, stdev=2988.26, samples=3 00:09:54.556 iops : min=16414, max=17794, avg=16938.67, stdev=747.06, samples=3 00:09:54.556 write: IOPS=17.0k, BW=66.5MiB/s (69.7MB/s)(133MiB/2001msec); 0 zone resets 00:09:54.556 slat (nsec): min=4362, max=44443, avg=5942.52, stdev=1778.94 00:09:54.556 clat (usec): min=281, max=11548, avg=3758.49, stdev=448.86 00:09:54.556 lat (usec): min=285, max=11559, avg=3764.44, stdev=449.48 00:09:54.556 clat percentiles (usec): 00:09:54.556 | 1.00th=[ 3228], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:09:54.556 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:09:54.556 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4490], 
00:09:54.556 | 99.00th=[ 4817], 99.50th=[ 6063], 99.90th=[ 7898], 99.95th=[10159], 00:09:54.556 | 99.99th=[11338] 00:09:54.556 bw ( KiB/s): min=66088, max=70720, per=99.30%, avg=67634.67, stdev=2671.98, samples=3 00:09:54.556 iops : min=16522, max=17680, avg=16908.67, stdev=668.00, samples=3 00:09:54.556 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:09:54.556 lat (msec) : 2=0.06%, 4=81.98%, 10=17.86%, 20=0.05% 00:09:54.556 cpu : usr=98.90%, sys=0.25%, ctx=5, majf=0, minf=625 00:09:54.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:54.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:54.556 issued rwts: total=33988,34074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:54.556 00:09:54.556 Run status group 0 (all jobs): 00:09:54.556 READ: bw=66.3MiB/s (69.6MB/s), 66.3MiB/s-66.3MiB/s (69.6MB/s-69.6MB/s), io=133MiB (139MB), run=2001-2001msec 00:09:54.556 WRITE: bw=66.5MiB/s (69.7MB/s), 66.5MiB/s-66.5MiB/s (69.7MB/s-69.7MB/s), io=133MiB (140MB), run=2001-2001msec 00:09:54.816 ----------------------------------------------------- 00:09:54.816 Suppressions used: 00:09:54.816 count bytes template 00:09:54.816 1 32 /usr/src/fio/parse.c 00:09:54.816 1 8 libtcmalloc_minimal.so 00:09:54.816 ----------------------------------------------------- 00:09:54.816 00:09:54.816 05:57:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:54.816 05:57:17 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:54.816 00:09:54.816 real 0m16.665s 00:09:54.816 user 0m13.618s 00:09:54.816 sys 0m1.441s 00:09:54.816 05:57:17 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.816 ************************************ 00:09:54.816 05:57:17 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:54.816 END TEST nvme_fio 00:09:54.816 ************************************ 00:09:54.816 00:09:54.816 real 1m25.837s 00:09:54.816 user 3m31.689s 00:09:54.816 sys 0m12.807s 00:09:54.816 05:57:17 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.816 05:57:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.816 ************************************ 00:09:54.816 END TEST nvme 00:09:54.816 ************************************ 00:09:54.816 05:57:17 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:54.816 05:57:17 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:54.816 05:57:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.816 05:57:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.816 05:57:17 -- common/autotest_common.sh@10 -- # set +x 00:09:54.816 ************************************ 00:09:54.816 START TEST nvme_scc 00:09:54.816 ************************************ 00:09:54.816 05:57:17 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:55.076 * Looking for test storage... 
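Each of the four fio runs above follows the same launch dance: `ldd` the SPDK fio plugin, pick out the libasan runtime it links against, and put that runtime first in LD_PRELOAD so ASAN initializes before the plugin does. Note also that the PCI address in `--filename` is written with dots (`traddr=0000.00.10.0`) because fio reserves `:` as a filename separator. A sketch of the pattern, with illustrative variable names:

```bash
# Sketch of the fio launch pattern repeated above for each controller.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
bdf=0000.00.10.0   # ':' replaced by '.'; fio treats ':' as a separator

# ASAN must load before the plugin, so preload it explicitly when present.
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    /usr/src/fio/fio "$config" "--filename=trtype=PCIe traddr=$bdf" --bs=4096
```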
00:09:55.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:55.076 05:57:17 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:55.076 05:57:17 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:55.076 05:57:17 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:55.076 05:57:18 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.076 05:57:18 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:55.077 05:57:18 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.077 05:57:18 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:55.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.077 --rc genhtml_branch_coverage=1 00:09:55.077 --rc genhtml_function_coverage=1 00:09:55.077 --rc genhtml_legend=1 00:09:55.077 --rc geninfo_all_blocks=1 00:09:55.077 --rc geninfo_unexecuted_blocks=1 00:09:55.077 00:09:55.077 ' 00:09:55.077 05:57:18 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:55.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.077 --rc genhtml_branch_coverage=1 00:09:55.077 --rc genhtml_function_coverage=1 00:09:55.077 --rc genhtml_legend=1 00:09:55.077 --rc geninfo_all_blocks=1 00:09:55.077 --rc geninfo_unexecuted_blocks=1 00:09:55.077 00:09:55.077 ' 00:09:55.077 05:57:18 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:55.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.077 --rc genhtml_branch_coverage=1 00:09:55.077 --rc genhtml_function_coverage=1 00:09:55.077 --rc genhtml_legend=1 00:09:55.077 --rc geninfo_all_blocks=1 00:09:55.077 --rc geninfo_unexecuted_blocks=1 00:09:55.077 00:09:55.077 ' 00:09:55.077 05:57:18 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:55.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.077 --rc genhtml_branch_coverage=1 00:09:55.077 --rc genhtml_function_coverage=1 00:09:55.077 --rc genhtml_legend=1 00:09:55.077 --rc geninfo_all_blocks=1 00:09:55.077 --rc geninfo_unexecuted_blocks=1 00:09:55.077 00:09:55.077 ' 00:09:55.077 05:57:18 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.077 05:57:18 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.077 05:57:18 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.077 05:57:18 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.077 05:57:18 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.077 05:57:18 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:55.077 05:57:18 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
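nvme_scc pulls in test/common/nvme/functions.sh, which, as traced above, derives the repo root from its own path (`dirname` plus `readlink -f ../../../`) before sourcing scripts/common.sh and the exported toolchain PATHs. The idiom, shown in isolation:

```bash
# The rootdir-resolution idiom from the functions.sh trace above, standalone;
# it works from wherever the script is sourced.
rootdir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")/../../../")
source "$rootdir/scripts/common.sh"
```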
00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:55.077 05:57:18 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:55.077 05:57:18 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.077 05:57:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:55.077 05:57:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:55.077 05:57:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:55.077 05:57:18 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:55.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.645 Waiting for block devices as requested 00:09:55.645 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.903 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.903 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.903 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.197 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:01.197 05:57:23 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:01.197 05:57:23 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:01.197 05:57:23 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:01.197 05:57:23 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.197 05:57:23 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
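scan_nvme_ctrls, whose first iteration begins above, shells out to `nvme id-ctrl /dev/nvme0` and scrapes every `reg : val` pair into a bash associative array named after the controller, splitting on `:` and assigning one key per line (vid=0x1b36 is the first hit here). A simplified sketch of that scrape loop:

```bash
# Simplified sketch of the nvme_get scrape starting above: turn
# 'nvme id-ctrl' output lines of the form 'reg : val' into an assoc array.
declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # strip padding around the key
    [[ -n $reg && -n $val ]] || continue     # skip blank/unparseable lines
    nvme0[$reg]=${val# }                     # keep the value, minus one space
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

echo "${nvme0[vid]}"    # 0x1b36 on this QEMU-emulated controller
```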
00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:01.197 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
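The long run of "IFS=: / read -r reg val / eval" lines above is nvme/functions.sh splitting each "field : value" line of nvme id-ctrl output into a bash associative array named after the controller. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 references in the trace; the key/value normalization steps are assumptions (the trace only shows the IFS split, the -n guard, and the final eval):

    nvme_get() {
        local ref=$1 reg val        # functions.sh@17: ref=nvme0, then reg/val per line
        shift                       # functions.sh@18: remaining args form the command
        local -gA "$ref=()"         # functions.sh@20: e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do          # functions.sh@21
            [[ -n $val ]] || continue            # functions.sh@22: skip lines with no value
            reg=${reg//[^a-zA-Z0-9_]/}           # assumed: squeezes 'ps    0 ' into 'ps0'
            val=${val# }                         # assumed: drops the space after ':'
            eval "${ref}[${reg,,}]=\"\$val\""    # functions.sh@23: nvme0[sn]='12341 '
        done < <(nvme "$@")         # functions.sh@16 invokes /usr/local/src/nvme-cli/nvme
    }

After nvme_get nvme0 id-ctrl /dev/nvme0 returns, every field read back in the trace is a plain lookup: ${nvme0[mdts]} is 7, ${nvme0[subnqn]} is nqn.2019-08.org.qemu:12341, and so on.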
00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:01.198 05:57:24 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.198 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.199 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:01.200 05:57:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
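The identify-namespace values just captured are enough to recompute nvme0n1's size by hand: nsze=0x140000 above is the block count, flbas=0x4 selects LBA format 4, and the lbaf4 entry recorded a little further down in this trace carries lbads:12, i.e. 4096-byte blocks. A short worked example against the populated array, using nothing beyond values shown in this trace:

    fmt=$(( ${nvme0n1[flbas]} & 0xf ))            # 0x4 -> in-use LBA format index 4
    lbaf=${nvme0n1[lbaf$fmt]}                     # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}     # -> 12
    echo $(( ${nvme0n1[nsze]} * (1 << lbads) ))   # 0x140000 * 4096 = 5368709120 bytes (5 GiB)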
00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.200 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
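It is these cached values that an SCC (Simple Copy) run presumably keys off later: the controller-side gate is bit 8 (Copy command support) of ONCS, captured above as nvme0[oncs]=0x12a's neighbor field, specifically oncs=0x15d. An illustrative check; the helper name is mine, not from functions.sh:

    ctrl_supports_scc() {
        local -n _ctrl=$1               # nameref onto nvme0, nvme1, ...
        (( ${_ctrl[oncs]} & 0x100 ))    # ONCS bit 8 = Copy command support
    }
    ctrl_supports_scc nvme0 && echo "nvme0 advertises Simple Copy"

Here 0x15d & 0x100 is nonzero, so the QEMU controller nvme0 advertises the Copy command.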
00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:01.201 05:57:24 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:01.201 05:57:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:01.201 05:57:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:01.201 05:57:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.201 05:57:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:01.201 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:01.202 05:57:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:01.202 
05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
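The functions.sh@47-63 lines a little further up show the outer discovery pass that led into this second controller: each /sys/class/nvme/nvme* entry is vetted with pci_can_use, identified via nvme_get, its namespaces are walked the same way, and the results land in the global ctrls/nvmes/bdfs/ordered_ctrls maps before the loop moves on to nvme1. A condensed sketch of that pass, reconstructed from the traced statements; the function wrapper, the sysfs address read, the per-controller _ns declaration, and the global map declarations are assumptions, while the loop bodies mirror the trace:

    scan_ctrls() {                                  # wrapper name is illustrative
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do       # functions.sh@47
            [[ -e $ctrl ]] || continue              # functions.sh@48
            pci=$(< "$ctrl/address")                # assumed source of functions.sh@49's BDF
            pci_can_use "$pci" || continue          # functions.sh@50 (scripts/common.sh)
            ctrl_dev=${ctrl##*/}                    # functions.sh@51: nvme0, nvme1, ...
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # functions.sh@52
            declare -gA "${ctrl_dev}_ns=()"         # assumed; declaration not visible here
            local -n _ctrl_ns=${ctrl_dev}_ns        # functions.sh@53
            for ns in "$ctrl/${ctrl##*/}n"*; do     # functions.sh@54
                [[ -e $ns ]] || continue            # functions.sh@55
                ns_dev=${ns##*/}                    # functions.sh@56: e.g. nvme0n1
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
                _ctrl_ns[${ns##*n}]=$ns_dev         # functions.sh@58: nsid -> array name
            done
            ctrls["$ctrl_dev"]=$ctrl_dev            # functions.sh@60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns       # functions.sh@61
            bdfs["$ctrl_dev"]=$pci                  # functions.sh@62: e.g. 0000:00:11.0
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # functions.sh@63
        done
    }

The trace above is the second pass through this loop: nvme0 was registered with bdf 0000:00:11.0, and nvme1 at 0000:00:10.0 is now being identified field by field.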
00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.202 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:01.203 05:57:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:01.203 05:57:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.203 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:01.204 05:57:24 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
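The functions.sh@16-23 pattern repeated throughout this dump is a single parse loop: nvme-cli's id-ctrl/id-ns output is split on ':' and each field is stored into a global associative array named after the device. A minimal sketch of that loop, reconstructed only from the trace fragments above and not copied from the SPDK tree (the whitespace trimming is an assumption; the trace shows only already-trimmed values):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                        # e.g. 'nvme1=()' at functions.sh@20
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue              # the [[ -n ... ]] checks at @22
          reg=${reg//[[:space:]]/}               # assumed trim of the padded key
          val=${val# }                           # trailing spaces survive, e.g. sn/mn values
          eval "${ref}[${reg}]=\"${val}\""       # e.g. nvme1[frmw]="0x3" at @23
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }

  nvme_get nvme1n1 id-ns /dev/nvme1n1            # the call visible at functions.sh@57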
00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:01.204 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.205 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:01.206 
05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:01.206 05:57:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:01.206 05:57:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:01.206 05:57:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.206 05:57:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:01.206 05:57:24 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.206 05:57:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:01.522 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:01.523 05:57:24 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:01.523 05:57:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
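Between the two register dumps (functions.sh@47-63 and scripts/common.sh@18-27, a few lines up) the script finishes registering nvme1 and moves on to nvme2: every /sys/class/nvme/nvme* entry is PCI-filtered, identified, and recorded in the ctrls/nvmes/bdfs/ordered_ctrls arrays. A sketch of that outer loop assembled from those fragments; reading the BDF from the sysfs 'address' attribute and pre-declaring the per-controller namespace array are assumptions, since the trace shows only the results:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(< "$ctrl/address")                   # assumed source of pci=0000:00:12.0
      pci_can_use "$pci" || continue             # the scripts/common.sh allow/deny check
      ctrl_dev=${ctrl##*/}                       # functions.sh@51: ctrl_dev=nvme2
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      declare -gA "${ctrl_dev}_ns=()"            # assumed; not visible in this excerpt
      declare -n _ctrl_ns=${ctrl_dev}_ns         # nameref, as at functions.sh@53
      for ns in "$ctrl/${ctrl_dev}n"*; do
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns##*n}]=$ns_dev            # functions.sh@58
      done
      ctrls["$ctrl_dev"]=$ctrl_dev               # functions.sh@60
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns          # functions.sh@61
      bdfs["$ctrl_dev"]=$pci                     # functions.sh@62
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev # functions.sh@63
  done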
00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:01.523 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:01.524 05:57:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
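The sqes/cqes values captured just above are packed nibbles per the NVMe Identify Controller layout: the low nibble is the minimum entry size and the high nibble the maximum, each as a power of two, so 0x66 means 64-byte submission-queue entries and 0x44 means 16-byte completion-queue entries (the standard PCIe sizes). A quick decode, assuming that layout:

  sqes=0x66 cqes=0x44
  printf 'SQE: min %d, max %d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
  printf 'CQE: min %d, max %d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))

The oncs value read a few entries further on (0x15d) is likewise a bitmask of optional supported commands rather than a count.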
00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:01.524 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:01.525 
05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
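The functions.sh@54-57 loop visible just before this globbed /sys/class/nvme/nvme2/nvme2n1 and re-ran the same parser against `nvme id-ns`, so nvme2n1 now carries namespace geometry. The low bits of flbas select the active LBA format (0x4 picks lbaf4, whose lbads:12 in the format table further down means 4096-byte blocks), and nsze is the block count, which sizes this namespace at 4 GiB. A sketch of that arithmetic, with values taken from the trace:

  nsze=0x100000 flbas=0x4 lbads=12           # lbads comes from the chosen lbaf entry
  fmt=$((flbas & 0xf))                       # active format index (low nibble)
  echo "lbaf${fmt}: $((nsze * 2 ** lbads)) bytes"   # 1048576 * 4096 = 4 GiB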
00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.525 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:01.526 05:57:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.526 05:57:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:01.527 05:57:24 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:01.527 05:57:24 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:01.527 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
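The mssrl/mcl/msrc fields recorded here for nvme2n2, as for nvme2n1 before it, are the Copy-command limits (maximum single source range length, maximum copy length, maximum source range count), and they pair with the Copy bit in the controller's oncs. A consistency check one could run over the parsed arrays, seeded with the values from this trace (msrc is 0-based in the spec, hence the +1; the bit position is an assumption from NVMe 2.0, not something this script tests):

  declare -A nvme2=([oncs]=0x15d)
  declare -A nvme2n2=([mssrl]=128 [mcl]=128 [msrc]=127)
  if (( ${nvme2[oncs]} & 0x100 )); then      # ONCS bit 8: Copy supported
      echo "Copy: up to $(( ${nvme2n2[msrc]} + 1 )) ranges, ${nvme2n2[mssrl]} blocks each"
  fi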
00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
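An nguid and eui64 of all zeros, as captured for each of these QEMU-emulated namespaces, mean the device advertises no unique namespace identifier, so udev's nguid/eui64-based /dev/disk/by-id links won't exist and tooling has to fall back to subsystem NQN plus NSID. A small guard along those lines (array contents copied from the trace; the fallback advice is a general convention, not something this script does):

  declare -A nvme2n2=([nguid]=00000000000000000000000000000000 [eui64]=0000000000000000)
  if [[ -z ${nvme2n2[nguid]//0/} && -z ${nvme2n2[eui64]//0/} ]]; then
      echo "nvme2n2: no NGUID/EUI64 -- key on subsystem NQN + NSID instead"
  fi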
00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 
05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.528 
05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:01.528 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:01.529 05:57:24 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:01.529 
05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:01.529 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:01.530 05:57:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:01.530 05:57:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:01.530 05:57:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.530 05:57:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:01.530 05:57:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:01.530 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 
05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.531 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.532 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:01.533 05:57:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:01.533 05:57:24 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:01.533 05:57:24 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:01.800 
05:57:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:01.800 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:01.801 05:57:24 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:01.801 05:57:24 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:01.801 05:57:24 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:01.801 05:57:24 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:02.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:02.626 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.626 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.626 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.884 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:02.884 05:57:25 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:02.884 05:57:25 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:02.884 05:57:25 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.884 05:57:25 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:02.884 ************************************ 00:10:02.884 START TEST nvme_simple_copy 00:10:02.884 ************************************ 00:10:02.884 05:57:25 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:03.143 Initializing NVMe Controllers 00:10:03.143 Attaching to 0000:00:10.0 00:10:03.143 Controller supports SCC. Attached to 0000:00:10.0 00:10:03.143 Namespace ID: 1 size: 6GB 00:10:03.143 Initialization complete. 00:10:03.143 00:10:03.143 Controller QEMU NVMe Ctrl (12340 ) 00:10:03.143 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:03.143 Namespace Block Size:4096 00:10:03.143 Writing LBAs 0 to 63 with Random Data 00:10:03.143 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:03.143 LBAs matching Written Data: 64 00:10:03.143 ************************************ 00:10:03.143 END TEST nvme_simple_copy 00:10:03.143 00:10:03.143 real 0m0.268s 00:10:03.143 user 0m0.098s 00:10:03.143 sys 0m0.067s 00:10:03.143 05:57:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.143 05:57:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:03.143 ************************************ 00:10:03.143 ************************************ 00:10:03.143 END TEST nvme_scc 00:10:03.143 ************************************ 00:10:03.143 00:10:03.143 real 0m8.260s 00:10:03.143 user 0m1.482s 00:10:03.143 sys 0m1.670s 00:10:03.143 05:57:26 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.143 05:57:26 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:03.143 05:57:26 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:03.143 05:57:26 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:03.143 05:57:26 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:03.143 05:57:26 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:03.143 05:57:26 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:03.143 05:57:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:03.143 05:57:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.143 05:57:26 -- common/autotest_common.sh@10 -- # set +x 00:10:03.143 ************************************ 00:10:03.143 START TEST nvme_fdp 00:10:03.143 ************************************ 00:10:03.143 05:57:26 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:03.403 * Looking for test storage... 
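The nvme_scc suite above was driven by autotest's run_test wrapper, which is what produces the starred START/END banners and the real/user/sys timing captured in the log. A minimal Bash sketch of that wrapper pattern follows; it is an illustrative reconstruction, not SPDK's actual autotest_common.sh implementation, and the banner width is an assumption.

    # Sketch of the START/END-banner-and-timing wrapper seen in the log.
    # Illustrative only; not the verbatim run_test from autotest_common.sh.
    run_test() {
        local name=$1
        shift
        printf '%s\n' '************************************' \
                      "START TEST $name" \
                      '************************************'
        time "$@"        # runs the test body, e.g. a binary plus its args
        local rc=$?      # capture the test's exit status, not time's output
        printf '%s\n' '************************************' \
                      "END TEST $name" \
                      '************************************'
        return $rc
    }

    # Usage mirroring the invocation in the log:
    #   run_test nvme_simple_copy \
    #       /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy \
    #       -r 'trtype:pcie traddr:0000:00:10.0'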
00:10:03.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.403 --rc genhtml_branch_coverage=1 00:10:03.403 --rc genhtml_function_coverage=1 00:10:03.403 --rc genhtml_legend=1 00:10:03.403 --rc geninfo_all_blocks=1 00:10:03.403 --rc geninfo_unexecuted_blocks=1 00:10:03.403 00:10:03.403 ' 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.403 --rc genhtml_branch_coverage=1 00:10:03.403 --rc genhtml_function_coverage=1 00:10:03.403 --rc genhtml_legend=1 00:10:03.403 --rc geninfo_all_blocks=1 00:10:03.403 --rc geninfo_unexecuted_blocks=1 00:10:03.403 00:10:03.403 ' 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.403 --rc genhtml_branch_coverage=1 00:10:03.403 --rc genhtml_function_coverage=1 00:10:03.403 --rc genhtml_legend=1 00:10:03.403 --rc geninfo_all_blocks=1 00:10:03.403 --rc geninfo_unexecuted_blocks=1 00:10:03.403 00:10:03.403 ' 00:10:03.403 05:57:26 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.403 --rc genhtml_branch_coverage=1 00:10:03.403 --rc genhtml_function_coverage=1 00:10:03.403 --rc genhtml_legend=1 00:10:03.403 --rc geninfo_all_blocks=1 00:10:03.403 --rc geninfo_unexecuted_blocks=1 00:10:03.403 00:10:03.403 ' 00:10:03.403 05:57:26 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.403 05:57:26 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.403 05:57:26 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.403 05:57:26 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.403 05:57:26 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.403 05:57:26 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:03.403 05:57:26 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
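The declarations that follow set up the bookkeeping this whole log revolves around: ctrls, nvmes, and bdfs are Bash associative arrays keyed by controller name, and ordered_ctrls preserves discovery order. Two patterns from the trace are worth condensing: nvme_get turns `nvme id-ctrl` "field : value" output into a dynamically named associative array via eval, and ctrl_has_scc tests ONCS bit 8 through a nameref, which is how nvme1 was selected for the SCC test earlier. The sketch below paraphrases those patterns from the trace; it is not the verbatim nvme/functions.sh source, and the whitespace trimming is an assumption about nvme-cli's output format.

    # Condensed paraphrase of the nvme/functions.sh patterns traced above;
    # not the verbatim source. Requires bash >= 4.3 (namerefs) and nvme-cli.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    nvme_get() {                       # e.g. nvme_get nvme0 id-ctrl /dev/nvme0
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # create the global array named "$ref"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue  # skip headers and blank lines
            # eval keeps the array name dynamic; assumes id-ctrl values are
            # plain tokens/strings with no quotes or shell metacharacters.
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(nvme "$@")
    }

    ctrl_has_scc() {                   # Simple Copy support = ONCS bit 8
        local -n _ctrl=$1              # nameref into e.g. the nvme1 array
        local oncs=${_ctrl[oncs]}      # e.g. 0x15d in this log
        (( oncs & 1 << 8 ))            # 0x15d & 0x100 != 0 -> supported
    }

    # Usage mirroring the trace:
    #   nvme_get nvme1 id-ctrl /dev/nvme1
    #   ctrl_has_scc nvme1 && echo "nvme1 supports Simple Copy"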
00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:03.403 05:57:26 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:03.403 05:57:26 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.403 05:57:26 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:03.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:03.969 Waiting for block devices as requested 00:10:03.969 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.228 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.228 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.228 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.504 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:09.504 05:57:32 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:09.504 05:57:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:09.504 05:57:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:09.504 05:57:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.504 05:57:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.504 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
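
From here the scan is mechanical: for every /sys/class/nvme/nvme* controller whose PCI address passes pci_can_use, nvme_get snapshots the nvme-cli id-ctrl output into a global associative array named after the controller, one key per "reg : val" line. A simplified sketch of that loop (the shape matches the trace below; quoting and error handling are trimmed relative to test/common/nvme/functions.sh):

    # $1 names the target array, the rest is the command whose
    # "reg : val" output lines get captured.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                          # e.g. declare -gA 'nvme0=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                # skip header/blank lines
            eval "${ref}[${reg// /}]=\"${val# }\""   # e.g. nvme0[vid]="0x1b36"
        done < <("$@")
    }

    nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
    echo "vid=${nvme0[vid]} sn=${nvme0[sn]}"         # -> vid=0x1b36 sn=12341

Because read assigns everything after the first ':' to val, values that themselves contain colons (the power-state line mp:25.00W ... rrt:0 rrl:0 later in this trace) survive intact.
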
00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:09.505 05:57:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:09.505 05:57:32 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
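
Several of the registers captured at this point are bitmasks rather than plain counters: oacs=0x12a advertises optional admin commands, frmw=0x3 describes firmware slots, lpa=0x7 log page attributes. As a rough illustration, decode_oacs below is a hypothetical helper (not part of functions.sh); bit meanings follow the NVMe base spec's Identify Controller data structure:

    # Hypothetical decoder for the OACS bitmask captured above.
    decode_oacs() {
        local -i oacs=$1 bit              # hex input like 0x12a is fine here
        local -a names=(
            [0]='Security Send/Receive'    [1]='Format NVM'
            [2]='Firmware Download/Commit' [3]='Namespace Management'
            [4]='Device Self-test'         [5]='Directives'
            [6]='NVMe-MI Send/Receive'     [7]='Virtualization Management'
            [8]='Doorbell Buffer Config'
        )
        for bit in "${!names[@]}"; do
            (( oacs & (1 << bit) )) && printf 'OACS bit %d: %s\n' "$bit" "${names[bit]}"
        done
    }

    decode_oacs 0x12a
    # -> bits 1, 3, 5, 8: Format NVM, Namespace Management,
    #    Directives, Doorbell Buffer Config
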
00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.505 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:09.506 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:09.506 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:09.506 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:09.507 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:09.507 
05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:09.507 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:09.508 05:57:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:09.508 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.508 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:09.509 05:57:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:09.509 05:57:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:09.509 05:57:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.509 05:57:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # 
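
That closes out nvme0: the namespaces were filled in through a bash nameref (local -n _ctrl_ns=nvme0_ns above), the controller was registered in ctrls/nvmes/bdfs/ordered_ctrls, and the loop moves on to nvme1 at 0000:00:10.0. One detail worth decoding from the id-ns capture: each lbaf entry encodes the data size as a power of two, so the in-use format 4 (flbas 0x4, lbads:12) means 4096-byte blocks. The test script itself doesn't compute this; a quick illustration using the arrays the trace just built:

    # Block size implied by the in-use LBA format captured above.
    lbaf=$(( nvme0n1[flbas] & 0xf ))      # low nibble = current format index
    [[ ${nvme0n1[lbaf$lbaf]} =~ lbads:([0-9]+) ]]
    echo $(( 1 << BASH_REMATCH[1] ))      # -> 4096
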
IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.509 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 
05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:09.510 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 
05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:09.511 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.511 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.512 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:09.513 05:57:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.513 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:09.514 05:57:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:09.514 05:57:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:09.514 05:57:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.514 05:57:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:09.514 
05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.514 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:09.515 05:57:32 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.515 05:57:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:09.515 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:09.780 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
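What the trace is executing here is the nvme_get read loop in nvme/functions.sh: the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 is consumed line by line with IFS=: read -r reg val (functions.sh@21), lines with an empty value are skipped (@22), and each surviving reg/val pair is eval'ed into the nvme2 associative array (@23). A minimal standalone sketch of that pattern, assuming the same "field : value" output shape; the array name ctrl and the whitespace trimming are illustrative, not copied from functions.sh:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parse loop seen in the trace: one identify field per line.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip headers/blank lines, as functions.sh@22 does
        reg=${reg//[[:space:]]/}         # "ps    0 " -> "ps0", matching the keys in the trace
        val=${val# }                     # drop the space nvme-cli prints after ':'
        # eval mirrors functions.sh@23; only the first ':' splits, so later colons
        # stay in val and ps0 keeps its full "mp:25.00W operational ..." string.
        eval "ctrl[$reg]=\"$val\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
    echo "oacs=${ctrl[oacs]} oncs=${ctrl[oncs]}"

Run against the controller in this trace, the sketch would print oacs=0x12a oncs=0x15d, the same values the trace stores.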
00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:09.781 05:57:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.781 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
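Fields like oncs land in the array as hex strings, so later gates in the suite can test optional-command bits arithmetically. Reading the 0x15d captured above against the ONCS bit layout in the NVMe base specification (the bit numbering is from the spec, not from this log) gives Compare (bit 0), Dataset Management (2), Write Zeroes (3), the Save/Select field (4), Timestamp (6) and Copy (8). A hedged sketch of such a check:

    # Sketch: bit-test ONCS from the parsed identify data; bash arithmetic
    # understands the 0x prefix, so the stored string can be used directly.
    declare -A nvme2=([oncs]=0x15d)      # value as captured in the trace above
    (( nvme2[oncs] & (1 << 2) )) && echo "Dataset Management supported"
    (( nvme2[oncs] & (1 << 8) )) && echo "Copy supported"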
00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:09.782 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
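For nvme2n1 the geometry is fully determined by three of the fields just stored: flbas=0x4 selects LBA format 4 (the low nibble of flbas), the lbaf4 descriptor recorded a few entries below reports lbads:12, i.e. 2^12 = 4096-byte data blocks, and nsze=0x100000 gives 1,048,576 of them, so the namespace is 4 GiB. A small sketch of that arithmetic over the array layout the trace produces; the regex extraction is illustrative:

    # Sketch: derive block size and capacity from the parsed id-ns fields.
    declare -A ns=([nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ns[flbas] & 0xf ))               # in-use LBA format index -> 4
    [[ ${ns[lbaf$fmt]} =~ lbads:([0-9]+) ]]  # pull lbads out of the descriptor
    bs=$(( 1 << BASH_REMATCH[1] ))           # 2^12 = 4096 bytes per block
    echo "$(( ns[nsze] * bs )) bytes"        # 1048576 * 4096 = 4294967296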
00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.782 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
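The same nvme_get walk runs once per namespace; the namespaces themselves are found by globbing sysfs at functions.sh@54 (for ns in "$ctrl/${ctrl##*/}n"*) and each parsed array is registered in the _ctrl_ns nameref keyed by namespace index (@58). A sketch of that enumeration, with echo standing in for the id-ns call:

    # Sketch: enumerate a controller's namespaces as functions.sh@54-58 does.
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns             # nameref, as at functions.sh@53
    for ns in "$ctrl/${ctrl##*/}n"*; do      # expands to .../nvme2n1 .../nvme2n2 ...
        [[ -e $ns ]] || continue             # the @55 existence guard
        ns_dev=${ns##*/}                     # e.g. nvme2n1
        _ctrl_ns[${ns_dev##*n}]=$ns_dev      # index 1 -> nvme2n1, as at @58
        echo "would parse: nvme id-ns /dev/$ns_dev"
    done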
00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:09.783 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:09.784 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.784 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:09.785 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
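The register-by-register trace above is one helper at work: nvme_get runs nvme-cli, splits every "reg : value" line on the first colon, and stores the pair in a global associative array (nvme2n3 here). A minimal bash sketch of that pattern, assuming the same "reg : value" output format; nvme_get_sketch is a hypothetical stand-in for the real helper in nvme/functions.sh:

#!/usr/bin/env bash
# Parse "reg : value" lines from nvme-cli into a global associative array.
nvme_get_sketch() {
  local ref=$1 ns_dev=$2 reg val
  local -gA "$ref=()"                 # e.g. declare a global nvme2n3=()
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}          # "lbaf  0 " -> "lbaf0"
    [[ -n $reg ]] || continue
    eval "${ref}[\$reg]=\${val# }"    # nvme2n3[nsze]=0x100000, ...
  done < <(/usr/local/src/nvme-cli/nvme id-ns "$ns_dev")
}
# Usage: nvme_get_sketch nvme2n3 /dev/nvme2n3; echo "${nvme2n3[nsze]}"

Because the value is the remainder of the line after the first colon, compound fields such as 'ms:0 lbads:9 rp:0' survive intact, which is why the lbaf entries in the trace keep their inner colons.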
00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:09.786 
05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:09.786 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:09.787 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:09.787 05:57:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:09.787 05:57:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:09.787 05:57:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.787 05:57:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:09.787 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:09.787 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 
05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.788 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 
05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:09.789 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
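Among the id-ctrl registers just captured for nvme3, sqes=0x66 and cqes=0x44 each pack two 4-bit log2 sizes (bits 3:0 = required entry size, bits 7:4 = maximum, per the NVMe base specification). A quick decode of those exact values:

# Decode SQES/CQES as recorded in the trace above.
sqes=0x66 cqes=0x44
echo "SQ entry size: min $((2 ** (sqes & 0xf))), max $((2 ** (sqes >> 4))) bytes"  # 64, 64
echo "CQ entry size: min $((2 ** (cqes & 0xf))), max $((2 ** (cqes >> 4))) bytes"  # 16, 16

These are the standard 64-byte submission and 16-byte completion queue entries; the QEMU controller advertises no oversized entry support.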
00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:09.790 05:57:32 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
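The loop running here is the FDP capability scan: ctrl_has_fdp reads each controller's CTRATT value and tests bit 19, which advertises Flexible Data Placement. Replayed with the four values echoed in this scan (nvme1, nvme0, nvme3, nvme2):

# CTRATT bit 19 (0x80000) flags FDP support, as in the ctrl_has_fdp trace.
for ctratt in 0x8000 0x8000 0x88010 0x8000; do
  (( ctratt & 1 << 19 )) && echo "ctratt=$ctratt: FDP" || echo "ctratt=$ctratt: no FDP"
done
# Only 0x88010 has bit 19 set (0x88010 & 0x80000 = 0x80000), so nvme3 is
# the controller the test selects below.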
00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:09.790 05:57:32 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:09.790 05:57:32 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:09.790 05:57:32 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:10.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:10.927 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.927 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.927 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.927 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:11.186 05:57:34 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:11.186 05:57:34 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:11.186 05:57:34 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.186 05:57:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:11.186 ************************************ 00:10:11.186 START TEST nvme_flexible_data_placement 00:10:11.186 ************************************ 00:10:11.186 05:57:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:11.445 Initializing NVMe Controllers 00:10:11.445 Attaching to 0000:00:13.0 00:10:11.445 Controller supports FDP Attached to 0000:00:13.0 00:10:11.445 Namespace ID: 1 Endurance Group ID: 1 00:10:11.445 Initialization complete. 00:10:11.445 00:10:11.445 ================================== 00:10:11.445 == FDP tests for Namespace: #01 == 00:10:11.445 ================================== 00:10:11.445 00:10:11.445 Get Feature: FDP: 00:10:11.445 ================= 00:10:11.445 Enabled: Yes 00:10:11.445 FDP configuration Index: 0 00:10:11.445 00:10:11.445 FDP configurations log page 00:10:11.445 =========================== 00:10:11.445 Number of FDP configurations: 1 00:10:11.445 Version: 0 00:10:11.445 Size: 112 00:10:11.445 FDP Configuration Descriptor: 0 00:10:11.445 Descriptor Size: 96 00:10:11.445 Reclaim Group Identifier format: 2 00:10:11.445 FDP Volatile Write Cache: Not Present 00:10:11.445 FDP Configuration: Valid 00:10:11.445 Vendor Specific Size: 0 00:10:11.445 Number of Reclaim Groups: 2 00:10:11.445 Number of Reclaim Unit Handles: 8 00:10:11.445 Max Placement Identifiers: 128 00:10:11.445 Number of Namespaces Supported: 256 00:10:11.445 Reclaim Unit Nominal Size: 6000000 bytes 00:10:11.445 Estimated Reclaim Unit Time Limit: Not Reported 00:10:11.445 RUH Desc #000: RUH Type: Initially Isolated 00:10:11.445 RUH Desc #001: RUH Type: Initially Isolated 00:10:11.445 RUH Desc #002: RUH Type: Initially Isolated 00:10:11.445 RUH Desc #003: RUH Type: Initially Isolated 00:10:11.445 RUH Desc #004: RUH Type: Initially Isolated 00:10:11.445 RUH Desc #005: RUH Type: Initially Isolated 00:10:11.445 RUH Desc #006: RUH Type: Initially Isolated 00:10:11.445 RUH Desc #007: RUH Type: Initially Isolated 00:10:11.445 00:10:11.445 FDP reclaim unit handle usage log page 00:10:11.445 ====================================== 00:10:11.445 Number of Reclaim Unit Handles: 8 00:10:11.445 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:11.445 RUH Usage Desc #001: RUH Attributes: Unused 00:10:11.445 RUH Usage Desc #002: RUH Attributes: Unused 00:10:11.445 RUH Usage Desc #003: RUH Attributes: Unused 00:10:11.445 RUH Usage Desc #004: RUH Attributes: Unused 00:10:11.445 RUH Usage Desc #005: RUH Attributes: Unused 00:10:11.445 RUH Usage Desc #006: RUH Attributes: Unused 00:10:11.445 RUH Usage Desc #007: RUH Attributes: Unused 00:10:11.445 00:10:11.445 FDP statistics log page 00:10:11.445 ======================= 00:10:11.445 Host bytes with metadata written: 1721061376 00:10:11.445 Media bytes with metadata written: 1721323520 00:10:11.445 Media bytes erased: 0 00:10:11.445 00:10:11.445 FDP Reclaim unit handle status 00:10:11.445 ============================== 00:10:11.445 Number of RUHS descriptors: 2 00:10:11.445 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000056ab 00:10:11.445 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:11.445 00:10:11.445 FDP write on placement id: 0 success 00:10:11.445 00:10:11.445 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:10:11.445 00:10:11.445 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:11.445 00:10:11.445 Get Feature: FDP Events for Placement handle: #0 00:10:11.445 ======================== 00:10:11.445 Number of FDP Events: 6 00:10:11.445 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:11.445 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:11.445 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:11.445 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:11.445 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:11.445 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:11.445 00:10:11.445 FDP events log page 00:10:11.445 =================== 00:10:11.445 Number of FDP events: 1 00:10:11.445 FDP Event #0: 00:10:11.445 Event Type: RU Not Written to Capacity 00:10:11.445 Placement Identifier: Valid 00:10:11.445 NSID: Valid 00:10:11.445 Location: Valid 00:10:11.445 Placement Identifier: 0 00:10:11.445 Event Timestamp: 4 00:10:11.445 Namespace Identifier: 1 00:10:11.445 Reclaim Group Identifier: 0 00:10:11.445 Reclaim Unit Handle Identifier: 0 00:10:11.445 00:10:11.445 FDP test passed 00:10:11.445 00:10:11.445 real 0m0.246s 00:10:11.445 user 0m0.072s 00:10:11.445 sys 0m0.071s 00:10:11.445 ************************************ 00:10:11.445 END TEST nvme_flexible_data_placement 00:10:11.445 05:57:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.445 05:57:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:11.445 ************************************ 00:10:11.445 ************************************ 00:10:11.445 END TEST nvme_fdp 00:10:11.445 ************************************ 00:10:11.445 00:10:11.445 real 0m8.161s 00:10:11.445 user 0m1.388s 00:10:11.445 sys 0m1.669s 00:10:11.445 05:57:34 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.445 05:57:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:11.445 05:57:34 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:11.445 05:57:34 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:11.445 05:57:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.445 05:57:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.445 05:57:34 -- common/autotest_common.sh@10 -- # set +x 00:10:11.445 ************************************ 00:10:11.445 START TEST nvme_rpc 00:10:11.445 ************************************ 00:10:11.445 05:57:34 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:11.445 * Looking for test storage...
00:10:11.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:11.445 05:57:34 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:11.445 05:57:34 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:11.445 05:57:34 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.704 05:57:34 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.704 05:57:34 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:11.704 05:57:34 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.705 --rc genhtml_branch_coverage=1 00:10:11.705 --rc genhtml_function_coverage=1 00:10:11.705 --rc genhtml_legend=1 00:10:11.705 --rc geninfo_all_blocks=1 00:10:11.705 --rc geninfo_unexecuted_blocks=1 00:10:11.705 00:10:11.705 ' 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.705 --rc genhtml_branch_coverage=1 00:10:11.705 --rc genhtml_function_coverage=1 00:10:11.705 --rc genhtml_legend=1 00:10:11.705 --rc geninfo_all_blocks=1 00:10:11.705 --rc geninfo_unexecuted_blocks=1 00:10:11.705 00:10:11.705 ' 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:11.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.705 --rc genhtml_branch_coverage=1 00:10:11.705 --rc genhtml_function_coverage=1 00:10:11.705 --rc genhtml_legend=1 00:10:11.705 --rc geninfo_all_blocks=1 00:10:11.705 --rc geninfo_unexecuted_blocks=1 00:10:11.705 00:10:11.705 ' 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.705 --rc genhtml_branch_coverage=1 00:10:11.705 --rc genhtml_function_coverage=1 00:10:11.705 --rc genhtml_legend=1 00:10:11.705 --rc geninfo_all_blocks=1 00:10:11.705 --rc geninfo_unexecuted_blocks=1 00:10:11.705 00:10:11.705 ' 00:10:11.705 05:57:34 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.705 05:57:34 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:11.705 05:57:34 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:11.705 05:57:34 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78621 00:10:11.705 05:57:34 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:11.705 05:57:34 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:11.705 05:57:34 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78621 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78621 ']' 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.705 05:57:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.963 [2024-12-08 05:57:34.749409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:11.963 [2024-12-08 05:57:34.749603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78621 ] 00:10:11.964 [2024-12-08 05:57:34.898314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:11.964 [2024-12-08 05:57:34.944034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.964 [2024-12-08 05:57:34.944071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.900 05:57:35 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.900 05:57:35 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:12.900 05:57:35 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:13.158 Nvme0n1 00:10:13.159 05:57:36 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:13.159 05:57:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:13.418 request: 00:10:13.418 { 00:10:13.418 "bdev_name": "Nvme0n1", 00:10:13.418 "filename": "non_existing_file", 00:10:13.418 "method": "bdev_nvme_apply_firmware", 00:10:13.418 "req_id": 1 00:10:13.418 } 00:10:13.418 Got JSON-RPC error response 00:10:13.418 response: 00:10:13.418 { 00:10:13.418 "code": -32603, 00:10:13.418 "message": "open file failed." 00:10:13.418 } 00:10:13.418 05:57:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:13.418 05:57:36 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:13.418 05:57:36 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:13.677 05:57:36 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:13.677 05:57:36 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78621 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78621 ']' 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78621 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78621 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.677 killing process with pid 78621 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78621' 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78621 00:10:13.677 05:57:36 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78621 00:10:14.245 00:10:14.245 real 0m2.678s 00:10:14.245 user 0m5.461s 00:10:14.245 sys 0m0.596s 00:10:14.245 ************************************ 00:10:14.245 END TEST nvme_rpc 00:10:14.245 ************************************ 00:10:14.245 05:57:37 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.245 05:57:37 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.245 05:57:37 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:14.245 05:57:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:14.245 05:57:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.245 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:10:14.245 ************************************ 00:10:14.245 START TEST nvme_rpc_timeouts 00:10:14.245 ************************************ 00:10:14.245 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:14.245 * Looking for test storage... 00:10:14.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:14.245 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:14.245 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:14.245 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:14.245 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.245 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.246 05:57:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:14.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.246 --rc genhtml_branch_coverage=1 00:10:14.246 --rc genhtml_function_coverage=1 00:10:14.246 --rc genhtml_legend=1 00:10:14.246 --rc geninfo_all_blocks=1 00:10:14.246 --rc geninfo_unexecuted_blocks=1 00:10:14.246 00:10:14.246 ' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:14.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.246 --rc genhtml_branch_coverage=1 00:10:14.246 --rc genhtml_function_coverage=1 00:10:14.246 --rc genhtml_legend=1 00:10:14.246 --rc geninfo_all_blocks=1 00:10:14.246 --rc geninfo_unexecuted_blocks=1 00:10:14.246 00:10:14.246 ' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:14.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.246 --rc genhtml_branch_coverage=1 00:10:14.246 --rc genhtml_function_coverage=1 00:10:14.246 --rc genhtml_legend=1 00:10:14.246 --rc geninfo_all_blocks=1 00:10:14.246 --rc geninfo_unexecuted_blocks=1 00:10:14.246 00:10:14.246 ' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:14.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.246 --rc genhtml_branch_coverage=1 00:10:14.246 --rc genhtml_function_coverage=1 00:10:14.246 --rc genhtml_legend=1 00:10:14.246 --rc geninfo_all_blocks=1 00:10:14.246 --rc geninfo_unexecuted_blocks=1 00:10:14.246 00:10:14.246 ' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:14.246 05:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78681 00:10:14.246 05:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78681 00:10:14.246 05:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78713 00:10:14.246 05:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
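The trap registered just above is the cleanup pattern this suite uses everywhere: kill the target and remove the temp files on any exit path, then disarm the trap once the test finishes cleanly (the "trap - SIGINT SIGTERM EXIT" seen later in the trace). A condensed sketch of the same pattern, with placeholder paths and a placeholder PID variable:

    spdk_tgt_pid=$!                       # placeholder; normally set right after launching spdk_tgt
    tmp_default=/tmp/settings_default_$$  # placeholder temp files
    tmp_modified=/tmp/settings_modified_$$

    # Clean up on interrupt, termination, or any other exit.
    trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmp_default} ${tmp_modified}; exit 1' SIGINT SIGTERM EXIT

    # ... test body runs here ...

    trap - SIGINT SIGTERM EXIT            # disarm before the normal teardown path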
00:10:14.246 05:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:14.246 05:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78713 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78713 ']' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.246 05:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:14.505 [2024-12-08 05:57:37.400454] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:14.505 [2024-12-08 05:57:37.400679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78713 ] 00:10:14.764 [2024-12-08 05:57:37.552397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:14.764 [2024-12-08 05:57:37.592974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.764 [2024-12-08 05:57:37.593032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.352 05:57:38 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.352 05:57:38 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:15.352 05:57:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:15.352 Checking default timeout settings: 00:10:15.352 05:57:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:15.926 05:57:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:15.926 Making settings changes with rpc: 00:10:15.926 05:57:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:16.184 Check default vs. modified settings: 00:10:16.184 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:16.184 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78681 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78681 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:16.751 Setting action_on_timeout is changed as expected. 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78681 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78681 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:16.751 Setting timeout_us is changed as expected. 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78681 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78681 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:16.751 Setting timeout_admin_us is changed as expected. 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78681 /tmp/settings_modified_78681 00:10:16.751 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78713 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78713 ']' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78713 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78713 00:10:16.751 killing process with pid 78713 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78713' 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78713 00:10:16.751 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78713 00:10:17.008 RPC TIMEOUT SETTING TEST PASSED. 00:10:17.008 05:57:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
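Every comparison in the trace above follows one recipe: grep the line for the setting out of the saved config dump, keep the second field, strip punctuation with sed, and compare the default against the modified value. The same recipe as a standalone sketch; the pipeline is copied from the trace, while the get_setting wrapper is an illustrative name, not from nvme_rpc_timeouts.sh:

    # Extract the value of a named setting from a saved config dump.
    get_setting() {
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    before=$(get_setting timeout_us /tmp/settings_default_78681)
    after=$(get_setting timeout_us /tmp/settings_modified_78681)
    if [ "$before" == "$after" ]; then
        echo "Setting timeout_us was not changed"; exit 1
    fi
    echo "Setting timeout_us is changed as expected."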
00:10:17.008 00:10:17.008 real 0m2.815s 00:10:17.008 user 0m5.846s 00:10:17.009 sys 0m0.591s 00:10:17.009 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.009 ************************************ 00:10:17.009 05:57:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:17.009 END TEST nvme_rpc_timeouts 00:10:17.009 ************************************ 00:10:17.009 05:57:39 -- spdk/autotest.sh@239 -- # uname -s 00:10:17.009 05:57:39 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:17.009 05:57:39 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:17.009 05:57:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:17.009 05:57:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.009 05:57:39 -- common/autotest_common.sh@10 -- # set +x 00:10:17.009 ************************************ 00:10:17.009 START TEST sw_hotplug 00:10:17.009 ************************************ 00:10:17.009 05:57:39 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:17.009 * Looking for test storage... 00:10:17.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:17.009 05:57:40 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:17.009 05:57:40 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:17.009 05:57:40 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:17.266 05:57:40 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.267 05:57:40 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:17.267 05:57:40 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.267 05:57:40 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:17.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.267 --rc genhtml_branch_coverage=1 00:10:17.267 --rc genhtml_function_coverage=1 00:10:17.267 --rc genhtml_legend=1 00:10:17.267 --rc geninfo_all_blocks=1 00:10:17.267 --rc geninfo_unexecuted_blocks=1 00:10:17.267 00:10:17.267 ' 00:10:17.267 05:57:40 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:17.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.267 --rc genhtml_branch_coverage=1 00:10:17.267 --rc genhtml_function_coverage=1 00:10:17.267 --rc genhtml_legend=1 00:10:17.267 --rc geninfo_all_blocks=1 00:10:17.267 --rc geninfo_unexecuted_blocks=1 00:10:17.267 00:10:17.267 ' 00:10:17.267 05:57:40 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:17.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.267 --rc genhtml_branch_coverage=1 00:10:17.267 --rc genhtml_function_coverage=1 00:10:17.267 --rc genhtml_legend=1 00:10:17.267 --rc geninfo_all_blocks=1 00:10:17.267 --rc geninfo_unexecuted_blocks=1 00:10:17.267 00:10:17.267 ' 00:10:17.267 05:57:40 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:17.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.267 --rc genhtml_branch_coverage=1 00:10:17.267 --rc genhtml_function_coverage=1 00:10:17.267 --rc genhtml_legend=1 00:10:17.267 --rc geninfo_all_blocks=1 00:10:17.267 --rc geninfo_unexecuted_blocks=1 00:10:17.267 00:10:17.267 ' 00:10:17.267 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:17.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.840 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:17.840 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:17.840 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:17.840 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:17.840 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:17.840 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:17.840 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
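The cmp_versions trace that recurs before each test (here deciding whether lcov 1.15 is older than 2) splits both version strings on "." and "-" and walks them field by field. A compact sketch of that comparison, independent of scripts/common.sh and without the suffix handling the real helper has:

    # Succeed when version $1 is strictly less than version $2.
    version_lt() {
        local IFS=.- v1 v2 i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"    # prints: lcov predates 2.x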
00:10:17.840 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:17.840 05:57:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:17.841 05:57:40 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:17.841 05:57:40 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:17.841 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:17.841 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:17.841 05:57:40 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:18.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.359 Waiting for block devices as requested 00:10:18.359 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.617 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.617 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.617 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.888 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:23.888 05:57:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:23.888 05:57:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:24.147 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:24.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.147 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:24.715 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:24.715 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:24.715 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:24.974 05:57:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79564 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:24.974 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:24.975 05:57:47 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:24.975 05:57:47 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:24.975 05:57:47 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:24.975 05:57:47 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:24.975 05:57:47 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:24.975 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:24.975 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:24.975 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:24.975 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:24.975 05:57:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:25.233 Initializing NVMe Controllers 00:10:25.233 Attaching to 0000:00:10.0 00:10:25.233 Attaching to 0000:00:11.0 00:10:25.233 Attached to 0000:00:10.0 00:10:25.233 Attached to 0000:00:11.0 00:10:25.233 Initialization complete. Starting I/O... 
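The nvme_in_userspace walk traced above reduces to one pipeline: list every PCI function with lspci, keep class 01 / subclass 08 / prog-if 02 (NVMe), then check which BDFs survive the PCI_ALLOWED filter and whether each still sits under a kernel driver. A reduced sketch of the enumeration, with the allow/deny filtering left out; the pipeline itself is taken from the scripts/common.sh trace:

    # Enumerate NVMe controllers (class 0108, prog-if 02) by BDF.
    nvme_bdfs() {
        lspci -mm -n -D | grep -i -- -p02 |
            awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    }

    for bdf in $(nvme_bdfs); do
        # A controller still bound to the kernel nvme driver shows up here.
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf bound to nvme"
    done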
00:10:25.233 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:25.233 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:25.233 00:10:26.170 QEMU NVMe Ctrl (12340 ): 1312 I/Os completed (+1312) 00:10:26.170 QEMU NVMe Ctrl (12341 ): 1388 I/Os completed (+1388) 00:10:26.170 00:10:27.244 QEMU NVMe Ctrl (12340 ): 3156 I/Os completed (+1844) 00:10:27.244 QEMU NVMe Ctrl (12341 ): 3297 I/Os completed (+1909) 00:10:27.244 00:10:28.179 QEMU NVMe Ctrl (12340 ): 5252 I/Os completed (+2096) 00:10:28.179 QEMU NVMe Ctrl (12341 ): 5420 I/Os completed (+2123) 00:10:28.179 00:10:29.114 QEMU NVMe Ctrl (12340 ): 7372 I/Os completed (+2120) 00:10:29.114 QEMU NVMe Ctrl (12341 ): 7568 I/Os completed (+2148) 00:10:29.114 00:10:30.494 QEMU NVMe Ctrl (12340 ): 9511 I/Os completed (+2139) 00:10:30.494 QEMU NVMe Ctrl (12341 ): 9736 I/Os completed (+2168) 00:10:30.494 00:10:31.061 05:57:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:31.061 05:57:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:31.061 05:57:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:31.061 [2024-12-08 05:57:53.926177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:31.061 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:31.061 [2024-12-08 05:57:53.928133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.928228] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.928258] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.928287] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:31.061 [2024-12-08 05:57:53.930675] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.930729] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.930756] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.930789] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 05:57:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:31.061 05:57:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:31.061 [2024-12-08 05:57:53.953885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:31.061 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:31.061 [2024-12-08 05:57:53.955667] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.955726] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.955756] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.955779] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:31.061 [2024-12-08 05:57:53.957764] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.957809] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.957841] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 [2024-12-08 05:57:53.957862] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.061 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:31.061 05:57:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:31.061 EAL: Scan for (pci) bus failed. 00:10:31.061 05:57:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:31.061 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.061 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.061 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:31.320 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:31.320 Attaching to 0000:00:10.0 00:10:31.320 Attached to 0000:00:10.0 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.320 05:57:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:31.320 Attaching to 0000:00:11.0 00:10:31.320 Attached to 0000:00:11.0 00:10:32.255 QEMU NVMe Ctrl (12340 ): 2052 I/Os completed (+2052) 00:10:32.255 QEMU NVMe Ctrl (12341 ): 1880 I/Os completed (+1880) 00:10:32.255 00:10:33.191 QEMU NVMe Ctrl (12340 ): 4168 I/Os completed (+2116) 00:10:33.191 QEMU NVMe Ctrl (12341 ): 4012 I/Os completed (+2132) 00:10:33.191 00:10:34.128 QEMU NVMe Ctrl (12340 ): 6336 I/Os completed (+2168) 00:10:34.128 QEMU NVMe Ctrl (12341 ): 6227 I/Os completed (+2215) 00:10:34.128 00:10:35.506 QEMU NVMe Ctrl (12340 ): 8540 I/Os completed (+2204) 00:10:35.506 QEMU NVMe Ctrl (12341 ): 8448 I/Os completed (+2221) 00:10:35.506 00:10:36.440 QEMU NVMe Ctrl (12340 ): 10744 I/Os completed (+2204) 00:10:36.440 QEMU NVMe Ctrl (12341 ): 10676 I/Os completed (+2228) 00:10:36.440 00:10:37.374 QEMU NVMe Ctrl (12340 ): 12868 I/Os completed (+2124) 00:10:37.374 QEMU NVMe Ctrl (12341 ): 12845 I/Os completed (+2169) 00:10:37.374 00:10:38.309 QEMU NVMe Ctrl (12340 ): 14968 I/Os completed (+2100) 00:10:38.309 QEMU NVMe Ctrl (12341 ): 14996 I/Os completed (+2151) 
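Each hotplug iteration visible in these counters is driven from sysfs: writing 1 to a device's remove node yanks the controller out from under the running application (hence the aborting-outstanding-command errors above), and writing 1 to the bus rescan node brings it back before the scripts rebind it to uio_pci_generic. A sketch of one such cycle, assuming standard Linux PCI sysfs paths; the BDF and the rebind sequence are illustrative, not a literal transcript of sw_hotplug.sh:

    bdf=0000:00:10.0                                   # placeholder BDF from the log

    echo 1 > /sys/bus/pci/devices/$bdf/remove          # surprise-remove the controller
    sleep 6                                            # give the app hotplug_wait seconds to notice
    echo 1 > /sys/bus/pci/rescan                       # rediscover the device on the bus

    # Rebind to the userspace-friendly driver so the app can attach again.
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > /sys/bus/pci/devices/$bdf/driver_override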
00:10:38.309 00:10:39.256 QEMU NVMe Ctrl (12340 ): 17180 I/Os completed (+2212) 00:10:39.256 QEMU NVMe Ctrl (12341 ): 17219 I/Os completed (+2223) 00:10:39.256 00:10:40.191 QEMU NVMe Ctrl (12340 ): 19148 I/Os completed (+1968) 00:10:40.191 QEMU NVMe Ctrl (12341 ): 19212 I/Os completed (+1993) 00:10:40.191 00:10:41.124 QEMU NVMe Ctrl (12340 ): 21104 I/Os completed (+1956) 00:10:41.124 QEMU NVMe Ctrl (12341 ): 21235 I/Os completed (+2023) 00:10:41.124 00:10:42.501 QEMU NVMe Ctrl (12340 ): 23228 I/Os completed (+2124) 00:10:42.501 QEMU NVMe Ctrl (12341 ): 23383 I/Os completed (+2148) 00:10:42.501 00:10:43.436 QEMU NVMe Ctrl (12340 ): 25385 I/Os completed (+2157) 00:10:43.436 QEMU NVMe Ctrl (12341 ): 25586 I/Os completed (+2203) 00:10:43.436 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.436 [2024-12-08 05:58:06.241880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:43.436 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:43.436 [2024-12-08 05:58:06.243515] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.243583] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.243605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.243631] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:43.436 [2024-12-08 05:58:06.245553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.245598] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.245620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.245641] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.436 [2024-12-08 05:58:06.267032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:43.436 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:43.436 [2024-12-08 05:58:06.268528] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.268577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.268602] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.268621] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:43.436 [2024-12-08 05:58:06.270171] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.270224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.270252] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 [2024-12-08 05:58:06.270270] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:43.436 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:43.436 EAL: Scan for (pci) bus failed. 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:43.436 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:43.436 Attaching to 0000:00:10.0 00:10:43.436 Attached to 0000:00:10.0 00:10:43.695 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:43.695 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.695 05:58:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:43.695 Attaching to 0000:00:11.0 00:10:43.695 Attached to 0000:00:11.0 00:10:44.263 QEMU NVMe Ctrl (12340 ): 1383 I/Os completed (+1383) 00:10:44.263 QEMU NVMe Ctrl (12341 ): 1187 I/Os completed (+1187) 00:10:44.263 00:10:45.201 QEMU NVMe Ctrl (12340 ): 3503 I/Os completed (+2120) 00:10:45.201 QEMU NVMe Ctrl (12341 ): 3318 I/Os completed (+2131) 00:10:45.201 00:10:46.139 QEMU NVMe Ctrl (12340 ): 5475 I/Os completed (+1972) 00:10:46.139 QEMU NVMe Ctrl (12341 ): 5318 I/Os completed (+2000) 00:10:46.139 00:10:47.153 QEMU NVMe Ctrl (12340 ): 7470 I/Os completed (+1995) 00:10:47.153 QEMU NVMe Ctrl (12341 ): 7370 I/Os completed (+2052) 00:10:47.153 00:10:48.091 QEMU NVMe Ctrl (12340 ): 9374 I/Os completed (+1904) 00:10:48.091 QEMU NVMe Ctrl (12341 ): 9375 I/Os completed (+2005) 00:10:48.091 00:10:49.472 QEMU NVMe Ctrl (12340 ): 11454 I/Os completed (+2080) 00:10:49.472 QEMU NVMe Ctrl (12341 ): 11491 I/Os completed (+2116) 00:10:49.472 00:10:50.409 QEMU NVMe Ctrl (12340 ): 13538 I/Os completed (+2084) 00:10:50.409 QEMU NVMe Ctrl (12341 ): 13605 I/Os completed (+2114) 00:10:50.409 
00:10:51.346 QEMU NVMe Ctrl (12340 ): 15698 I/Os completed (+2160) 00:10:51.346 QEMU NVMe Ctrl (12341 ): 15778 I/Os completed (+2173) 00:10:51.346 00:10:52.284 QEMU NVMe Ctrl (12340 ): 17722 I/Os completed (+2024) 00:10:52.284 QEMU NVMe Ctrl (12341 ): 17915 I/Os completed (+2137) 00:10:52.284 00:10:53.220 QEMU NVMe Ctrl (12340 ): 19826 I/Os completed (+2104) 00:10:53.220 QEMU NVMe Ctrl (12341 ): 20069 I/Os completed (+2154) 00:10:53.220 00:10:54.157 QEMU NVMe Ctrl (12340 ): 22018 I/Os completed (+2192) 00:10:54.157 QEMU NVMe Ctrl (12341 ): 22271 I/Os completed (+2202) 00:10:54.157 00:10:55.090 QEMU NVMe Ctrl (12340 ): 24122 I/Os completed (+2104) 00:10:55.090 QEMU NVMe Ctrl (12341 ): 24435 I/Os completed (+2164) 00:10:55.090 00:10:55.656 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:55.656 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:55.656 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.656 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.656 [2024-12-08 05:58:18.573419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:55.656 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:55.656 [2024-12-08 05:58:18.574975] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.656 [2024-12-08 05:58:18.575030] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.656 [2024-12-08 05:58:18.575053] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.656 [2024-12-08 05:58:18.575081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.656 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:55.656 [2024-12-08 05:58:18.576836] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.576883] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.576905] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.576926] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.657 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.657 [2024-12-08 05:58:18.602941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:55.657 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:55.657 [2024-12-08 05:58:18.604401] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.604452] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.604478] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.604497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:55.657 [2024-12-08 05:58:18.606022] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.606065] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.606089] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 [2024-12-08 05:58:18.606108] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.657 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:55.657 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:55.657 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:55.657 EAL: Scan for (pci) bus failed. 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:55.915 Attaching to 0000:00:10.0 00:10:55.915 Attached to 0000:00:10.0 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.915 05:58:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:55.915 Attaching to 0000:00:11.0 00:10:55.915 Attached to 0000:00:11.0 00:10:55.915 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:55.915 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:55.915 [2024-12-08 05:58:18.904864] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:08.144 05:58:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:08.144 05:58:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:08.144 05:58:30 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.98 00:11:08.144 05:58:30 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.98 00:11:08.144 05:58:30 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:08.145 05:58:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.98 00:11:08.145 05:58:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.98 2 00:11:08.145 remove_attach_helper took 42.98s to complete (handling 2 nvme drive(s)) 05:58:30 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79564 00:11:14.706 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79564) - No such process 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79564 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80107 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:14.706 05:58:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80107 00:11:14.706 05:58:36 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 80107 ']' 00:11:14.706 05:58:36 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.706 05:58:36 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.706 05:58:36 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.706 05:58:36 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.706 05:58:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.706 [2024-12-08 05:58:37.025851] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
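[editor's note] Before the target phase starts, the script installs a cleanup trap so an aborted run cannot leave the NVMe functions detached from every driver; the trace above shows the trap verbatim at sw_hotplug.sh@112. A reduced version of the pattern, with the SPDK test helpers killprocess and waitforlisten replaced by plain stand-ins, would be:

    # Start the SPDK target in the background and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!

    # On interrupt or exit, kill the target and rescan the PCI bus so the
    # devices are rediscovered even if the test dies mid-hotplug.
    trap 'kill "$spdk_tgt_pid"; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT

    # waitforlisten polls the RPC socket; a minimal stand-in just waits for
    # the UNIX-domain socket at the default path to appear.
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done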
00:11:14.706 [2024-12-08 05:58:37.026071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80107 ] 00:11:14.706 [2024-12-08 05:58:37.170149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.706 [2024-12-08 05:58:37.205685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:14.965 05:58:37 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:14.965 05:58:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.550 05:58:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.550 05:58:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.550 [2024-12-08 05:58:44.085910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
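[editor's note] Once the target is listening, hotplug is driven through the RPC interface rather than through a child app: bdev_nvme_set_hotplug -e arms the monitor, and remove_attach_helper repeats the detach/reattach cycle three times with a six-second settle period, as the locals traced above show (hotplug_events=3, hotplug_wait=6, use_bdev=true). A skeleton of that flow, using SPDK's rpc.py client (rpc_cmd in the trace is a thin wrapper around it); the loop body is summarized in comments since the full script is not quoted here:

    # Arm the bdev-layer hotplug monitor in the running target.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e

    remove_attach_helper() {
        local hotplug_events=3 hotplug_wait=6
        while ((hotplug_events--)); do
            # detach both controllers via sysfs, wait for their bdevs to
            # vanish, rescan and rebind, then give I/O time to resume
            sleep "$hotplug_wait"
        done
    }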
00:11:21.550 [2024-12-08 05:58:44.088510] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.088561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.088587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.550 [2024-12-08 05:58:44.088618] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.088636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.088650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.550 [2024-12-08 05:58:44.088669] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.088682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.088698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.550 [2024-12-08 05:58:44.088711] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.088726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.088739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.550 05:58:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:21.550 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:21.550 [2024-12-08 05:58:44.585942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:21.550 [2024-12-08 05:58:44.588772] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.588850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.588871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.550 [2024-12-08 05:58:44.588908] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.588922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.588937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.550 [2024-12-08 05:58:44.588951] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.588965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.588978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.550 [2024-12-08 05:58:44.589011] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.550 [2024-12-08 05:58:44.589024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.550 [2024-12-08 05:58:44.589040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.812 05:58:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.812 05:58:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.812 05:58:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.812 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:22.070 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:22.070 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.070 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.070 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.070 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:22.070 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:22.070 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.071 05:58:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:34.279 05:58:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:34.279 05:58:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:34.279 05:58:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:34.280 05:58:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.280 05:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.280 05:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.280 05:58:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.280 05:58:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.280 05:58:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.280 [2024-12-08 05:58:57.086063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:34.280 [2024-12-08 05:58:57.088755] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.280 [2024-12-08 05:58:57.088804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.280 [2024-12-08 05:58:57.088826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.280 [2024-12-08 05:58:57.088849] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.280 [2024-12-08 05:58:57.088865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.280 [2024-12-08 05:58:57.088880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.280 [2024-12-08 05:58:57.088895] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.280 [2024-12-08 05:58:57.088908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.280 [2024-12-08 05:58:57.088923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.280 [2024-12-08 05:58:57.088936] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.280 [2024-12-08 05:58:57.088953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.280 [2024-12-08 05:58:57.088967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.280 05:58:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.280 05:58:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.280 05:58:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:34.280 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.540 [2024-12-08 05:58:57.486071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:34.540 [2024-12-08 05:58:57.488550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.540 [2024-12-08 05:58:57.488619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.540 [2024-12-08 05:58:57.488639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.540 [2024-12-08 05:58:57.488661] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.540 [2024-12-08 05:58:57.488674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.540 [2024-12-08 05:58:57.488687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.540 [2024-12-08 05:58:57.488700] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.540 [2024-12-08 05:58:57.488713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.540 [2024-12-08 05:58:57.488724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.540 [2024-12-08 05:58:57.488754] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.540 [2024-12-08 05:58:57.488765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.540 [2024-12-08 05:58:57.488778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:34.804 05:58:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.804 05:58:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.804 05:58:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:34.804 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.063 05:58:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.265 05:59:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:47.265 05:59:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:47.265 05:59:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.265 05:59:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.265 05:59:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.265 05:59:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.265 [2024-12-08 05:59:10.086243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:47.265 [2024-12-08 05:59:10.089251] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.265 [2024-12-08 05:59:10.089318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.265 [2024-12-08 05:59:10.089346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.265 [2024-12-08 05:59:10.089368] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.265 [2024-12-08 05:59:10.089386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.265 [2024-12-08 05:59:10.089400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.265 [2024-12-08 05:59:10.089415] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.265 [2024-12-08 05:59:10.089428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.265 [2024-12-08 05:59:10.089446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.265 [2024-12-08 05:59:10.089460] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.265 [2024-12-08 05:59:10.089475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.265 [2024-12-08 05:59:10.089489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.265 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.266 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.266 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.266 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.266 05:59:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.266 05:59:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.266 05:59:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.266 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:47.266 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.525 [2024-12-08 05:59:10.486203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
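[editor's note] The bdev_bdfs helper that keeps appearing at sw_hotplug.sh@12-13 is what the wait loops are built on: it asks the target for its bdevs and reduces the answer to a sorted list of PCI addresses. Reassembled from the traced fragments (rpc_cmd is the SPDK test wrapper around rpc.py; the trace feeds jq through process substitution, a plain pipe is equivalent):

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # After a removal, poll until no NVMe-backed bdevs remain -- this is the
    # loop printing "Still waiting for %s to be gone" in the log.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done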
00:11:47.525 [2024-12-08 05:59:10.488658] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.525 [2024-12-08 05:59:10.488742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.525 [2024-12-08 05:59:10.488761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.525 [2024-12-08 05:59:10.488796] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.525 [2024-12-08 05:59:10.488809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.525 [2024-12-08 05:59:10.488824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.525 [2024-12-08 05:59:10.488836] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.525 [2024-12-08 05:59:10.488851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.525 [2024-12-08 05:59:10.488862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.525 [2024-12-08 05:59:10.488876] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.525 [2024-12-08 05:59:10.488887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.525 [2024-12-08 05:59:10.488899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.785 05:59:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.785 05:59:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.785 05:59:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:47.785 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.044 05:59:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:48.044 05:59:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.044 05:59:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.044 05:59:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.10 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.10 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.10 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.10 2 00:12:00.246 remove_attach_helper took 45.10s to complete (handling 2 nvme drive(s)) 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:00.246 05:59:23 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:00.246 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:00.246 05:59:23 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.815 05:59:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.815 05:59:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.815 05:59:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.815 [2024-12-08 05:59:29.218008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:06.815 [2024-12-08 05:59:29.219666] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.219718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.219742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 [2024-12-08 05:59:29.219764] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.219781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.219795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 [2024-12-08 05:59:29.219811] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.219825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.219847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 [2024-12-08 05:59:29.219861] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.219876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.219889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.815 [2024-12-08 05:59:29.618002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:06.815 [2024-12-08 05:59:29.619672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.619751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.619770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 [2024-12-08 05:59:29.619789] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.619803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.619818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 [2024-12-08 05:59:29.619831] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.619845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.619857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 [2024-12-08 05:59:29.619870] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.815 [2024-12-08 05:59:29.619882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.815 [2024-12-08 05:59:29.619898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.815 05:59:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.815 05:59:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.815 05:59:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:06.815 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.073 05:59:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:07.073 05:59:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:07.073 05:59:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.073 05:59:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.330 05:59:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.330 05:59:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.330 05:59:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.330 05:59:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.330 05:59:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.330 [2024-12-08 05:59:42.218176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:19.330 [2024-12-08 05:59:42.219845] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.330 [2024-12-08 05:59:42.219924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.330 [2024-12-08 05:59:42.219949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.330 [2024-12-08 05:59:42.219970] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.330 [2024-12-08 05:59:42.219986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.330 [2024-12-08 05:59:42.220000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.330 [2024-12-08 05:59:42.220015] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.330 [2024-12-08 05:59:42.220028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.330 [2024-12-08 05:59:42.220043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.330 [2024-12-08 05:59:42.220055] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.330 [2024-12-08 05:59:42.220070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.330 [2024-12-08 05:59:42.220087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.330 05:59:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:19.330 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:19.588 [2024-12-08 05:59:42.618190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:19.588 [2024-12-08 05:59:42.619851] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.588 [2024-12-08 05:59:42.619934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.588 [2024-12-08 05:59:42.619953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.588 [2024-12-08 05:59:42.619973] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.588 [2024-12-08 05:59:42.619987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.589 [2024-12-08 05:59:42.620001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.589 [2024-12-08 05:59:42.620015] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.589 [2024-12-08 05:59:42.620029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.589 [2024-12-08 05:59:42.620041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.589 [2024-12-08 05:59:42.620055] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.589 [2024-12-08 05:59:42.620067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.589 [2024-12-08 05:59:42.620081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.846 05:59:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.846 05:59:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.846 05:59:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:19.846 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:20.112 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.112 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.112 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:20.112 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:20.112 05:59:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.112 05:59:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.112 05:59:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.112 05:59:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:20.112 05:59:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:20.112 05:59:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.112 05:59:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.336 05:59:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.336 05:59:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.336 05:59:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.336 05:59:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.336 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.336 [2024-12-08 05:59:55.218391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
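[editor's note] The mirror-image check traced above at sw_hotplug.sh@71 confirms that after the rescan both controllers registered bdevs again; bash xtrace prints the right-hand side of the pattern match with every character escaped, which makes it look garbled. Unescaped, it is simply:

    bdfs=($(bdev_bdfs))   # expect both functions back after the rescan
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]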
00:12:32.336 05:59:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.336 [2024-12-08 05:59:55.219980] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.336 [2024-12-08 05:59:55.220034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.336 [2024-12-08 05:59:55.220058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.336 [2024-12-08 05:59:55.220080] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.336 [2024-12-08 05:59:55.220100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.336 [2024-12-08 05:59:55.220114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.336 [2024-12-08 05:59:55.220130] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.336 [2024-12-08 05:59:55.220142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.336 [2024-12-08 05:59:55.220158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.337 [2024-12-08 05:59:55.220171] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.337 [2024-12-08 05:59:55.220203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.337 [2024-12-08 05:59:55.220220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.337 05:59:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.337 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:32.337 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:32.903 [2024-12-08 05:59:55.718393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:32.903 [2024-12-08 05:59:55.720036] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.903 [2024-12-08 05:59:55.720118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.903 [2024-12-08 05:59:55.720138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.903 [2024-12-08 05:59:55.720158] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.903 [2024-12-08 05:59:55.720171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.903 [2024-12-08 05:59:55.720185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.903 [2024-12-08 05:59:55.720224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.903 [2024-12-08 05:59:55.720246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.903 [2024-12-08 05:59:55.720260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.903 [2024-12-08 05:59:55.720274] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.903 [2024-12-08 05:59:55.720286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.903 [2024-12-08 05:59:55.720299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.903 05:59:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.903 05:59:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.903 05:59:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:32.903 05:59:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.161 05:59:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.04 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.04 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.04 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.04 2 00:12:45.375 remove_attach_helper took 45.04s to complete (handling 2 nvme drive(s)) 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:45.375 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80107 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 80107 ']' 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 80107 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80107 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.375 killing process with pid 80107 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80107' 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@969 -- # kill 80107 00:12:45.375 06:00:08 sw_hotplug -- common/autotest_common.sh@974 -- # wait 80107 00:12:45.634 06:00:08 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:46.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:46.457 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:46.457 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:46.714 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:46.714 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:46.714 00:12:46.714 real 2m29.646s 00:12:46.714 user 1m48.769s 00:12:46.714 sys 0m20.434s 00:12:46.714 06:00:09 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.714 ************************************ 00:12:46.714 END TEST sw_hotplug 00:12:46.714 ************************************ 00:12:46.714 06:00:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.714 06:00:09 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:46.714 06:00:09 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:46.714 06:00:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:46.714 06:00:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.714 06:00:09 -- common/autotest_common.sh@10 -- # set +x 00:12:46.714 ************************************ 00:12:46.714 START TEST nvme_xnvme 00:12:46.714 ************************************ 00:12:46.714 06:00:09 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:46.714 * Looking for test storage... 00:12:46.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:46.714 06:00:09 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:46.714 06:00:09 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:46.714 06:00:09 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:46.972 06:00:09 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.972 06:00:09 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:46.972 06:00:09 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.972 06:00:09 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.972 --rc genhtml_branch_coverage=1 00:12:46.972 --rc genhtml_function_coverage=1 00:12:46.972 --rc genhtml_legend=1 00:12:46.972 --rc geninfo_all_blocks=1 00:12:46.972 --rc geninfo_unexecuted_blocks=1 00:12:46.972 00:12:46.972 ' 00:12:46.972 06:00:09 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.972 --rc genhtml_branch_coverage=1 00:12:46.972 --rc genhtml_function_coverage=1 00:12:46.973 --rc genhtml_legend=1 00:12:46.973 --rc geninfo_all_blocks=1 00:12:46.973 --rc geninfo_unexecuted_blocks=1 00:12:46.973 00:12:46.973 ' 00:12:46.973 06:00:09 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:46.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.973 --rc genhtml_branch_coverage=1 00:12:46.973 --rc genhtml_function_coverage=1 00:12:46.973 --rc genhtml_legend=1 00:12:46.973 --rc geninfo_all_blocks=1 00:12:46.973 --rc geninfo_unexecuted_blocks=1 00:12:46.973 00:12:46.973 ' 00:12:46.973 06:00:09 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:46.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.973 --rc genhtml_branch_coverage=1 00:12:46.973 --rc genhtml_function_coverage=1 00:12:46.973 --rc genhtml_legend=1 00:12:46.973 --rc geninfo_all_blocks=1 00:12:46.973 --rc geninfo_unexecuted_blocks=1 00:12:46.973 00:12:46.973 ' 00:12:46.973 06:00:09 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.973 06:00:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.973 06:00:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.973 06:00:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.973 06:00:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.973 06:00:09 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.973 06:00:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.973 06:00:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.973 06:00:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:46.973 06:00:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.973 06:00:09 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:46.973 06:00:09 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:46.973 06:00:09 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.973 06:00:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.973 ************************************ 00:12:46.973 START TEST xnvme_to_malloc_dd_copy 00:12:46.973 ************************************ 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:46.973 06:00:09 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:46.973 06:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:46.973 { 00:12:46.973 "subsystems": [ 00:12:46.973 { 00:12:46.973 "subsystem": "bdev", 00:12:46.973 "config": [ 00:12:46.973 { 00:12:46.973 "params": { 00:12:46.973 "block_size": 512, 00:12:46.973 "num_blocks": 2097152, 00:12:46.973 "name": "malloc0" 00:12:46.973 }, 00:12:46.973 "method": "bdev_malloc_create" 00:12:46.973 }, 00:12:46.973 { 00:12:46.973 "params": { 00:12:46.973 "io_mechanism": "libaio", 00:12:46.973 "filename": "/dev/nullb0", 00:12:46.973 "name": "null0" 00:12:46.973 }, 00:12:46.973 "method": "bdev_xnvme_create" 00:12:46.973 }, 00:12:46.973 { 00:12:46.973 "method": "bdev_wait_for_examine" 00:12:46.973 } 00:12:46.973 ] 00:12:46.973 } 00:12:46.973 ] 00:12:46.973 } 00:12:46.973 [2024-12-08 06:00:09.980349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:46.973 [2024-12-08 06:00:09.980534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81465 ] 00:12:47.230 [2024-12-08 06:00:10.131909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.230 [2024-12-08 06:00:10.175745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.603  [2024-12-08T06:00:12.581Z] Copying: 171/1024 [MB] (171 MBps) [2024-12-08T06:00:13.514Z] Copying: 353/1024 [MB] (181 MBps) [2024-12-08T06:00:14.886Z] Copying: 533/1024 [MB] (180 MBps) [2024-12-08T06:00:15.818Z] Copying: 715/1024 [MB] (181 MBps) [2024-12-08T06:00:16.382Z] Copying: 896/1024 [MB] (180 MBps) [2024-12-08T06:00:16.639Z] Copying: 1024/1024 [MB] (average 180 MBps) 00:12:53.594 00:12:53.594 06:00:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:53.594 06:00:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:53.594 06:00:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:53.594 06:00:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.594 { 00:12:53.594 "subsystems": [ 00:12:53.594 { 00:12:53.594 "subsystem": "bdev", 00:12:53.594 "config": [ 00:12:53.594 { 00:12:53.594 "params": { 00:12:53.594 "block_size": 512, 00:12:53.594 "num_blocks": 2097152, 00:12:53.594 "name": "malloc0" 00:12:53.594 }, 00:12:53.594 "method": "bdev_malloc_create" 00:12:53.594 }, 00:12:53.594 { 00:12:53.594 "params": { 00:12:53.594 "io_mechanism": "libaio", 00:12:53.594 "filename": "/dev/nullb0", 00:12:53.594 "name": "null0" 00:12:53.594 }, 00:12:53.594 "method": "bdev_xnvme_create" 00:12:53.594 }, 00:12:53.594 { 00:12:53.594 "method": "bdev_wait_for_examine" 00:12:53.594 } 00:12:53.594 ] 00:12:53.594 } 00:12:53.594 ] 00:12:53.594 } 00:12:53.851 [2024-12-08 06:00:16.660096] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:53.851 [2024-12-08 06:00:16.660283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81547 ] 00:12:53.851 [2024-12-08 06:00:16.806786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.851 [2024-12-08 06:00:16.841760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.243  [2024-12-08T06:00:19.219Z] Copying: 180/1024 [MB] (180 MBps) [2024-12-08T06:00:20.154Z] Copying: 364/1024 [MB] (183 MBps) [2024-12-08T06:00:21.089Z] Copying: 543/1024 [MB] (179 MBps) [2024-12-08T06:00:22.460Z] Copying: 725/1024 [MB] (181 MBps) [2024-12-08T06:00:23.024Z] Copying: 906/1024 [MB] (181 MBps) [2024-12-08T06:00:23.282Z] Copying: 1024/1024 [MB] (average 181 MBps) 00:13:00.237 00:13:00.237 06:00:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:00.237 06:00:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:00.237 06:00:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:00.237 06:00:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:00.237 06:00:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:00.237 06:00:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:00.237 { 00:13:00.237 "subsystems": [ 00:13:00.237 { 00:13:00.237 "subsystem": "bdev", 00:13:00.237 "config": [ 00:13:00.237 { 00:13:00.237 "params": { 00:13:00.237 "block_size": 512, 00:13:00.237 "num_blocks": 2097152, 00:13:00.237 "name": "malloc0" 00:13:00.237 }, 00:13:00.237 "method": "bdev_malloc_create" 00:13:00.237 }, 00:13:00.237 { 00:13:00.237 "params": { 00:13:00.237 "io_mechanism": "io_uring", 00:13:00.237 "filename": "/dev/nullb0", 00:13:00.237 "name": "null0" 00:13:00.237 }, 00:13:00.237 "method": "bdev_xnvme_create" 00:13:00.237 }, 00:13:00.237 { 00:13:00.238 "method": "bdev_wait_for_examine" 00:13:00.238 } 00:13:00.238 ] 00:13:00.238 } 00:13:00.238 ] 00:13:00.238 } 00:13:00.238 [2024-12-08 06:00:23.247141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
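This test runs the same 1 GiB copy in both directions for each xnvme I/O mechanism: malloc0 is a 2097152-block x 512-byte ram bdev, null0 sits on /dev/nullb0 (modprobe null_blk gb=1), and spdk_dd streams one into the other. Each pass reduces to the invocation shape traced at xnvme.sh@42 and @47 (the /dev/fd/62 argument is the gen_conf process substitution that emits the JSON printed above):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # malloc -> xnvme
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # xnvme -> malloc

The libaio passes above average 180-181 MBps; the io_uring passes that follow reach roughly 192 MBps.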
00:13:00.238 [2024-12-08 06:00:23.247355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81623 ] 00:13:00.495 [2024-12-08 06:00:23.395015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.495 [2024-12-08 06:00:23.432427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.870  [2024-12-08T06:00:25.852Z] Copying: 186/1024 [MB] (186 MBps) [2024-12-08T06:00:26.785Z] Copying: 380/1024 [MB] (194 MBps) [2024-12-08T06:00:27.718Z] Copying: 572/1024 [MB] (191 MBps) [2024-12-08T06:00:29.092Z] Copying: 767/1024 [MB] (195 MBps) [2024-12-08T06:00:29.092Z] Copying: 961/1024 [MB] (194 MBps) [2024-12-08T06:00:29.659Z] Copying: 1024/1024 [MB] (average 192 MBps) 00:13:06.614 00:13:06.614 06:00:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:06.614 06:00:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:06.614 06:00:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:06.614 06:00:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:06.614 { 00:13:06.614 "subsystems": [ 00:13:06.614 { 00:13:06.614 "subsystem": "bdev", 00:13:06.614 "config": [ 00:13:06.614 { 00:13:06.614 "params": { 00:13:06.614 "block_size": 512, 00:13:06.614 "num_blocks": 2097152, 00:13:06.614 "name": "malloc0" 00:13:06.614 }, 00:13:06.614 "method": "bdev_malloc_create" 00:13:06.614 }, 00:13:06.614 { 00:13:06.614 "params": { 00:13:06.614 "io_mechanism": "io_uring", 00:13:06.614 "filename": "/dev/nullb0", 00:13:06.614 "name": "null0" 00:13:06.614 }, 00:13:06.614 "method": "bdev_xnvme_create" 00:13:06.614 }, 00:13:06.614 { 00:13:06.614 "method": "bdev_wait_for_examine" 00:13:06.614 } 00:13:06.614 ] 00:13:06.614 } 00:13:06.614 ] 00:13:06.614 } 00:13:06.614 [2024-12-08 06:00:29.471376] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:06.614 [2024-12-08 06:00:29.471554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81694 ] 00:13:06.614 [2024-12-08 06:00:29.618887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.614 [2024-12-08 06:00:29.655704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.988  [2024-12-08T06:00:31.965Z] Copying: 193/1024 [MB] (193 MBps) [2024-12-08T06:00:32.898Z] Copying: 386/1024 [MB] (193 MBps) [2024-12-08T06:00:34.270Z] Copying: 581/1024 [MB] (194 MBps) [2024-12-08T06:00:35.211Z] Copying: 772/1024 [MB] (191 MBps) [2024-12-08T06:00:35.211Z] Copying: 965/1024 [MB] (193 MBps) [2024-12-08T06:00:35.777Z] Copying: 1024/1024 [MB] (average 192 MBps) 00:13:12.732 00:13:12.732 06:00:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:12.732 06:00:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:12.732 00:13:12.732 real 0m25.763s 00:13:12.732 user 0m20.879s 00:13:12.732 sys 0m4.376s 00:13:12.732 06:00:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.732 06:00:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:12.732 ************************************ 00:13:12.732 END TEST xnvme_to_malloc_dd_copy 00:13:12.732 ************************************ 00:13:12.732 06:00:35 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:12.732 06:00:35 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:12.732 06:00:35 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.732 06:00:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.732 ************************************ 00:13:12.732 START TEST xnvme_bdevperf 00:13:12.732 ************************************ 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # 
method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:12.732 06:00:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:12.732 { 00:13:12.732 "subsystems": [ 00:13:12.732 { 00:13:12.732 "subsystem": "bdev", 00:13:12.732 "config": [ 00:13:12.732 { 00:13:12.732 "params": { 00:13:12.732 "io_mechanism": "libaio", 00:13:12.732 "filename": "/dev/nullb0", 00:13:12.732 "name": "null0" 00:13:12.732 }, 00:13:12.732 "method": "bdev_xnvme_create" 00:13:12.732 }, 00:13:12.732 { 00:13:12.732 "method": "bdev_wait_for_examine" 00:13:12.732 } 00:13:12.732 ] 00:13:12.732 } 00:13:12.732 ] 00:13:12.732 } 00:13:12.990 [2024-12-08 06:00:35.782511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:12.990 [2024-12-08 06:00:35.782689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81793 ] 00:13:12.991 [2024-12-08 06:00:35.923814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.991 [2024-12-08 06:00:35.961475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.248 Running I/O for 5 seconds... 
00:13:15.177 127680.00 IOPS, 498.75 MiB/s [2024-12-08T06:00:39.170Z] 126464.00 IOPS, 494.00 MiB/s [2024-12-08T06:00:40.105Z] 126314.67 IOPS, 493.42 MiB/s [2024-12-08T06:00:41.482Z] 126144.00 IOPS, 492.75 MiB/s 00:13:18.437 Latency(us) 00:13:18.437 [2024-12-08T06:00:41.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.437 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:18.437 null0 : 5.00 126346.58 493.54 0.00 0.00 503.43 212.25 2323.55 00:13:18.437 [2024-12-08T06:00:41.482Z] =================================================================================================================== 00:13:18.437 [2024-12-08T06:00:41.482Z] Total : 126346.58 493.54 0.00 0.00 503.43 212.25 2323.55 00:13:18.437 06:00:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:18.437 06:00:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:18.437 06:00:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:18.437 06:00:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:18.437 06:00:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:18.437 06:00:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.437 { 00:13:18.437 "subsystems": [ 00:13:18.437 { 00:13:18.437 "subsystem": "bdev", 00:13:18.437 "config": [ 00:13:18.437 { 00:13:18.437 "params": { 00:13:18.437 "io_mechanism": "io_uring", 00:13:18.437 "filename": "/dev/nullb0", 00:13:18.437 "name": "null0" 00:13:18.437 }, 00:13:18.437 "method": "bdev_xnvme_create" 00:13:18.437 }, 00:13:18.437 { 00:13:18.437 "method": "bdev_wait_for_examine" 00:13:18.437 } 00:13:18.437 ] 00:13:18.437 } 00:13:18.437 ] 00:13:18.437 } 00:13:18.437 [2024-12-08 06:00:41.371170] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:18.437 [2024-12-08 06:00:41.371396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81862 ] 00:13:18.696 [2024-12-08 06:00:41.518476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.696 [2024-12-08 06:00:41.553915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.696 Running I/O for 5 seconds... 
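Between the libaio table above and the io_uring run whose output follows, only one generated field changes: xnvme.sh@72 flips io_mechanism before regenerating the config. Condensed from the traced loop (gen_conf and the bdevperf flags are as shown in this log; the surrounding glue is a sketch):

  xnvme_io=(libaio io_uring)
  declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
  for io in "${xnvme_io[@]}"; do
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      # gen_conf serializes the array into the bdev JSON fed on /dev/fd/62.
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
          -q 64 -w randread -t 5 -T null0 -o 4096
  done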
00:13:21.008 162432.00 IOPS, 634.50 MiB/s [2024-12-08T06:00:44.989Z] 163776.00 IOPS, 639.75 MiB/s [2024-12-08T06:00:45.926Z] 163178.67 IOPS, 637.42 MiB/s [2024-12-08T06:00:46.860Z] 163744.00 IOPS, 639.62 MiB/s 00:13:23.815 Latency(us) 00:13:23.815 [2024-12-08T06:00:46.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.815 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:23.815 null0 : 5.00 163405.69 638.30 0.00 0.00 388.63 212.25 2070.34 00:13:23.815 [2024-12-08T06:00:46.860Z] =================================================================================================================== 00:13:23.815 [2024-12-08T06:00:46.860Z] Total : 163405.69 638.30 0.00 0.00 388.63 212.25 2070.34 00:13:23.815 06:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:23.815 06:00:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:23.815 00:13:23.815 real 0m11.172s 00:13:23.815 user 0m8.349s 00:13:23.815 sys 0m2.597s 00:13:23.815 06:00:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.815 06:00:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:23.815 ************************************ 00:13:23.815 END TEST xnvme_bdevperf 00:13:23.815 ************************************ 00:13:24.074 00:13:24.074 real 0m37.229s 00:13:24.074 user 0m29.385s 00:13:24.074 sys 0m7.114s 00:13:24.074 06:00:46 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.074 06:00:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.074 ************************************ 00:13:24.074 END TEST nvme_xnvme 00:13:24.074 ************************************ 00:13:24.074 06:00:46 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:24.074 06:00:46 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.074 06:00:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.074 06:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:24.074 ************************************ 00:13:24.074 START TEST blockdev_xnvme 00:13:24.074 ************************************ 00:13:24.074 06:00:46 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:24.074 * Looking for test storage... 
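Before the blockdev suite output continues: the two latency tables above are internally consistent with their queue depth. With 64 requests kept in flight, the average latency should sit near qd / IOPS, and it does:

  # Little's-law sanity check against the reported averages.
  echo '64 / 126346.58 * 1000000' | bc -l   # ~506.5 us vs 503.43 us reported (libaio)
  echo '64 / 163405.69 * 1000000' | bc -l   # ~391.7 us vs 388.63 us reported (io_uring)

The small shortfall in the reported averages is the time the queue spends below full depth during ramp-up and final draining.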
00:13:24.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:24.074 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:24.074 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:24.074 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:24.074 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.074 06:00:47 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.332 06:00:47 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:24.332 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.332 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.332 --rc genhtml_branch_coverage=1 00:13:24.332 --rc genhtml_function_coverage=1 00:13:24.332 --rc genhtml_legend=1 00:13:24.332 --rc geninfo_all_blocks=1 00:13:24.332 --rc geninfo_unexecuted_blocks=1 00:13:24.332 00:13:24.332 ' 00:13:24.332 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.332 --rc genhtml_branch_coverage=1 00:13:24.332 --rc genhtml_function_coverage=1 00:13:24.332 --rc genhtml_legend=1 
00:13:24.332 --rc geninfo_all_blocks=1 00:13:24.332 --rc geninfo_unexecuted_blocks=1 00:13:24.332 00:13:24.332 ' 00:13:24.332 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.332 --rc genhtml_branch_coverage=1 00:13:24.332 --rc genhtml_function_coverage=1 00:13:24.332 --rc genhtml_legend=1 00:13:24.332 --rc geninfo_all_blocks=1 00:13:24.332 --rc geninfo_unexecuted_blocks=1 00:13:24.332 00:13:24.332 ' 00:13:24.332 06:00:47 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.332 --rc genhtml_branch_coverage=1 00:13:24.332 --rc genhtml_function_coverage=1 00:13:24.332 --rc genhtml_legend=1 00:13:24.332 --rc geninfo_all_blocks=1 00:13:24.332 --rc geninfo_unexecuted_blocks=1 00:13:24.332 00:13:24.332 ' 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:24.332 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:24.333 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81993 00:13:24.333 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:24.333 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:24.333 06:00:47 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81993 00:13:24.333 06:00:47 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 81993 ']' 00:13:24.333 06:00:47 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.333 06:00:47 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.333 06:00:47 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.333 06:00:47 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.333 06:00:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.333 [2024-12-08 06:00:47.287666] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:24.333 [2024-12-08 06:00:47.287942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81993 ] 00:13:24.590 [2024-12-08 06:00:47.451352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.590 [2024-12-08 06:00:47.487100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.524 06:00:48 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.524 06:00:48 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:25.524 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:25.524 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:25.524 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:25.524 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:25.524 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:25.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.783 Waiting for block devices as requested 00:13:25.783 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.041 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.041 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.041 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:31.306 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:31.306 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
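The zoned-device scan traced here (and continuing below through nvme2n2, nvme2n3, nvme3c3n1, and nvme3n1) is how the suite excludes zoned namespaces before creating xnvme bdevs. Reconstructed from the autotest_common.sh@1648-1659 xtrace, with the recording step simplified to a sketch since the real helper's bookkeeping is not visible in this log:

  # A device is zoned when the kernel exposes queue/zoned with a value
  # other than "none" (the [[ -e ]] / [[ none != none ]] checks traced above).
  is_block_zoned() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      [[ $(</sys/block/$device/queue/zoned) != none ]]
  }

  get_zoned_devs() {
      local -gA zoned_devs=()
      local nvme bdf
      for nvme in /sys/block/nvme*; do
          # How the real helper keys each entry is not shown here; the
          # device name alone is recorded in this sketch.
          is_block_zoned "${nvme##*/}" && zoned_devs["${nvme##*/}"]=1
      done
  }

All six namespaces here report none, so nothing is excluded and every /dev/nvme*n* becomes a bdev_xnvme_create candidate in the loop that follows.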
00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:31.306 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.307 06:00:54 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:31.307 nvme0n1 00:13:31.307 nvme1n1 00:13:31.307 nvme2n1 00:13:31.307 nvme2n2 00:13:31.307 nvme2n3 00:13:31.307 nvme3n1 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.307 06:00:54 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:31.307 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 06:00:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.566 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:31.566 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:31.566 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e7a1c2ca-29c2-484b-9a55-1c730329df10"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e7a1c2ca-29c2-484b-9a55-1c730329df10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8e3aa138-e815-43c4-be89-6d87935e05b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8e3aa138-e815-43c4-be89-6d87935e05b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "191126d9-6c9f-4375-b011-79bfa4479409"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "191126d9-6c9f-4375-b011-79bfa4479409",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c25474e1-1f9b-4fb2-84fb-c63afe59cb9e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c25474e1-1f9b-4fb2-84fb-c63afe59cb9e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cab0e201-3946-4eda-a7b6-5ce499f03cb0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cab0e201-3946-4eda-a7b6-5ce499f03cb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cfea88ce-b3c5-451f-b15b-bc6c1d9716cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cfea88ce-b3c5-451f-b15b-bc6c1d9716cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:31.566 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:31.566 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:31.566 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:31.566 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81993 00:13:31.566 06:00:54 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81993 ']' 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81993 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81993 00:13:31.566 killing process with pid 81993 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81993' 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81993 00:13:31.566 06:00:54 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81993 00:13:31.824 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:31.824 06:00:54 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:31.824 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:31.824 06:00:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.824 06:00:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.824 ************************************ 00:13:31.824 START TEST bdev_hello_world 00:13:31.824 ************************************ 00:13:31.824 06:00:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:31.824 [2024-12-08 06:00:54.854929] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:31.824 [2024-12-08 06:00:54.855108] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82347 ] 00:13:32.083 [2024-12-08 06:00:55.003789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.083 [2024-12-08 06:00:55.041610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.341 [2024-12-08 06:00:55.206316] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:32.341 [2024-12-08 06:00:55.206380] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:32.341 [2024-12-08 06:00:55.206428] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:32.341 [2024-12-08 06:00:55.208804] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:32.341 [2024-12-08 06:00:55.209141] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:32.341 [2024-12-08 06:00:55.209172] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:32.341 [2024-12-08 06:00:55.209516] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
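(Aside: the two steps traced above — enumerating unclaimed bdevs over RPC, then running the hello-world example — can be replayed by hand. The sketch below is illustrative only: it assumes the same checkout at /home/vagrant/spdk_repo/spdk, that a matching SPDK app is still listening on the default /var/tmp/spdk.sock when the RPC runs, and the same bdev.json config; hello_bdev then boots its own app from that JSON. The jq program condenses the two-step filter traced at blockdev.sh@747-748.)

    cd /home/vagrant/spdk_repo/spdk
    # List unclaimed bdevs and extract their names, as blockdev.sh does before picking nvme0n1
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    # Run the example against the first xNVMe bdev, mirroring the run_test bdev_hello_world invocation
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1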
00:13:32.341 00:13:32.341 [2024-12-08 06:00:55.209620] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:32.611 00:13:32.611 real 0m0.632s 00:13:32.611 user 0m0.354s 00:13:32.611 sys 0m0.168s 00:13:32.611 06:00:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.611 ************************************ 00:13:32.611 END TEST bdev_hello_world 00:13:32.611 ************************************ 00:13:32.611 06:00:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:32.611 06:00:55 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:32.611 06:00:55 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.611 06:00:55 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.611 06:00:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.611 ************************************ 00:13:32.611 START TEST bdev_bounds 00:13:32.611 ************************************ 00:13:32.611 06:00:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:32.611 Process bdevio pid: 82374 00:13:32.611 06:00:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82374 00:13:32.611 06:00:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:32.611 06:00:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82374' 00:13:32.611 06:00:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:32.611 06:00:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82374 00:13:32.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.612 06:00:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 82374 ']' 00:13:32.612 06:00:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.612 06:00:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.612 06:00:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.612 06:00:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.612 06:00:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:32.612 [2024-12-08 06:00:55.526122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:32.612 [2024-12-08 06:00:55.526301] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82374 ] 00:13:32.884 [2024-12-08 06:00:55.668701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.884 [2024-12-08 06:00:55.707855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.884 [2024-12-08 06:00:55.707946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.884 [2024-12-08 06:00:55.708000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.817 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.817 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:33.817 06:00:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:33.817 I/O targets: 00:13:33.817 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:33.817 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:33.817 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:33.817 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:33.817 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:33.817 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:33.817 00:13:33.817 00:13:33.817 CUnit - A unit testing framework for C - Version 2.1-3 00:13:33.817 http://cunit.sourceforge.net/ 00:13:33.817 00:13:33.817 00:13:33.817 Suite: bdevio tests on: nvme3n1 00:13:33.817 Test: blockdev write read block ...passed 00:13:33.817 Test: blockdev write zeroes read block ...passed 00:13:33.817 Test: blockdev write zeroes read no split ...passed 00:13:33.817 Test: blockdev write zeroes read split ...passed 00:13:33.817 Test: blockdev write zeroes read split partial ...passed 00:13:33.817 Test: blockdev reset ...passed 00:13:33.817 Test: blockdev write read 8 blocks ...passed 00:13:33.817 Test: blockdev write read size > 128k ...passed 00:13:33.817 Test: blockdev write read invalid size ...passed 00:13:33.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.817 Test: blockdev write read max offset ...passed 00:13:33.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.817 Test: blockdev writev readv 8 blocks ...passed 00:13:33.817 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.817 Test: blockdev writev readv block ...passed 00:13:33.817 Test: blockdev writev readv size > 128k ...passed 00:13:33.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.817 Test: blockdev comparev and writev ...passed 00:13:33.817 Test: blockdev nvme passthru rw ...passed 00:13:33.817 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.817 Test: blockdev nvme admin passthru ...passed 00:13:33.817 Test: blockdev copy ...passed 00:13:33.817 Suite: bdevio tests on: nvme2n3 00:13:33.817 Test: blockdev write read block ...passed 00:13:33.817 Test: blockdev write zeroes read block ...passed 00:13:33.817 Test: blockdev write zeroes read no split ...passed 00:13:33.817 Test: blockdev write zeroes read split ...passed 00:13:33.817 Test: blockdev write zeroes read split partial ...passed 00:13:33.817 Test: blockdev reset ...passed 
00:13:33.817 Test: blockdev write read 8 blocks ...passed 00:13:33.817 Test: blockdev write read size > 128k ...passed 00:13:33.817 Test: blockdev write read invalid size ...passed 00:13:33.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.817 Test: blockdev write read max offset ...passed 00:13:33.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.817 Test: blockdev writev readv 8 blocks ...passed 00:13:33.817 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.817 Test: blockdev writev readv block ...passed 00:13:33.817 Test: blockdev writev readv size > 128k ...passed 00:13:33.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.817 Test: blockdev comparev and writev ...passed 00:13:33.817 Test: blockdev nvme passthru rw ...passed 00:13:33.817 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.817 Test: blockdev nvme admin passthru ...passed 00:13:33.818 Test: blockdev copy ...passed 00:13:33.818 Suite: bdevio tests on: nvme2n2 00:13:33.818 Test: blockdev write read block ...passed 00:13:33.818 Test: blockdev write zeroes read block ...passed 00:13:33.818 Test: blockdev write zeroes read no split ...passed 00:13:33.818 Test: blockdev write zeroes read split ...passed 00:13:33.818 Test: blockdev write zeroes read split partial ...passed 00:13:33.818 Test: blockdev reset ...passed 00:13:33.818 Test: blockdev write read 8 blocks ...passed 00:13:33.818 Test: blockdev write read size > 128k ...passed 00:13:33.818 Test: blockdev write read invalid size ...passed 00:13:33.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.818 Test: blockdev write read max offset ...passed 00:13:33.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.818 Test: blockdev writev readv 8 blocks ...passed 00:13:33.818 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.818 Test: blockdev writev readv block ...passed 00:13:33.818 Test: blockdev writev readv size > 128k ...passed 00:13:33.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.818 Test: blockdev comparev and writev ...passed 00:13:33.818 Test: blockdev nvme passthru rw ...passed 00:13:33.818 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.818 Test: blockdev nvme admin passthru ...passed 00:13:33.818 Test: blockdev copy ...passed 00:13:33.818 Suite: bdevio tests on: nvme2n1 00:13:33.818 Test: blockdev write read block ...passed 00:13:33.818 Test: blockdev write zeroes read block ...passed 00:13:33.818 Test: blockdev write zeroes read no split ...passed 00:13:33.818 Test: blockdev write zeroes read split ...passed 00:13:33.818 Test: blockdev write zeroes read split partial ...passed 00:13:33.818 Test: blockdev reset ...passed 00:13:33.818 Test: blockdev write read 8 blocks ...passed 00:13:33.818 Test: blockdev write read size > 128k ...passed 00:13:33.818 Test: blockdev write read invalid size ...passed 00:13:33.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.818 Test: blockdev write read max offset ...passed 00:13:33.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.818 Test: blockdev writev readv 8 blocks 
...passed 00:13:33.818 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.818 Test: blockdev writev readv block ...passed 00:13:33.818 Test: blockdev writev readv size > 128k ...passed 00:13:33.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.818 Test: blockdev comparev and writev ...passed 00:13:33.818 Test: blockdev nvme passthru rw ...passed 00:13:33.818 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.818 Test: blockdev nvme admin passthru ...passed 00:13:33.818 Test: blockdev copy ...passed 00:13:33.818 Suite: bdevio tests on: nvme1n1 00:13:33.818 Test: blockdev write read block ...passed 00:13:33.818 Test: blockdev write zeroes read block ...passed 00:13:33.818 Test: blockdev write zeroes read no split ...passed 00:13:33.818 Test: blockdev write zeroes read split ...passed 00:13:33.818 Test: blockdev write zeroes read split partial ...passed 00:13:33.818 Test: blockdev reset ...passed 00:13:33.818 Test: blockdev write read 8 blocks ...passed 00:13:33.818 Test: blockdev write read size > 128k ...passed 00:13:33.818 Test: blockdev write read invalid size ...passed 00:13:33.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.818 Test: blockdev write read max offset ...passed 00:13:33.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.818 Test: blockdev writev readv 8 blocks ...passed 00:13:33.818 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.818 Test: blockdev writev readv block ...passed 00:13:33.818 Test: blockdev writev readv size > 128k ...passed 00:13:33.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.818 Test: blockdev comparev and writev ...passed 00:13:33.818 Test: blockdev nvme passthru rw ...passed 00:13:33.818 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.818 Test: blockdev nvme admin passthru ...passed 00:13:33.818 Test: blockdev copy ...passed 00:13:33.818 Suite: bdevio tests on: nvme0n1 00:13:33.818 Test: blockdev write read block ...passed 00:13:33.818 Test: blockdev write zeroes read block ...passed 00:13:33.818 Test: blockdev write zeroes read no split ...passed 00:13:33.818 Test: blockdev write zeroes read split ...passed 00:13:33.818 Test: blockdev write zeroes read split partial ...passed 00:13:33.818 Test: blockdev reset ...passed 00:13:33.818 Test: blockdev write read 8 blocks ...passed 00:13:33.818 Test: blockdev write read size > 128k ...passed 00:13:33.818 Test: blockdev write read invalid size ...passed 00:13:33.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.818 Test: blockdev write read max offset ...passed 00:13:33.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.818 Test: blockdev writev readv 8 blocks ...passed 00:13:33.818 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.818 Test: blockdev writev readv block ...passed 00:13:33.818 Test: blockdev writev readv size > 128k ...passed 00:13:33.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.818 Test: blockdev comparev and writev ...passed 00:13:33.818 Test: blockdev nvme passthru rw ...passed 00:13:33.818 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.818 Test: blockdev nvme admin passthru ...passed 00:13:33.818 Test: blockdev copy ...passed 
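(Aside on how the six suites above were driven: bdevio is started in wait mode so it only registers the bdevs from the JSON config and idles, and tests.py then triggers the whole CUnit run over RPC — the same pair of commands traced at blockdev.sh@288 and @293. A hedged sketch of that sequence, flags copied from the trace above:)

    cd /home/vagrant/spdk_repo/spdk
    # Start bdevio waiting for an RPC trigger (-w), flags as in the blockdev.sh@288 trace
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    # (blockdev.sh waits for /var/tmp/spdk.sock via waitforlisten before this step)
    ./test/bdev/bdevio/tests.py perform_tests
    # Teardown corresponds to the killprocess 82374 trace just below
    kill $bdevio_pid && wait $bdevio_pid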
00:13:33.818 00:13:33.818 Run Summary: Type Total Ran Passed Failed Inactive 00:13:33.818 suites 6 6 n/a 0 0 00:13:33.818 tests 138 138 138 0 0 00:13:33.818 asserts 780 780 780 0 n/a 00:13:33.818 00:13:33.818 Elapsed time = 0.325 seconds 00:13:33.818 0 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82374 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 82374 ']' 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 82374 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82374 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82374' 00:13:34.076 killing process with pid 82374 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 82374 00:13:34.076 06:00:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 82374 00:13:34.076 06:00:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:34.076 00:13:34.076 real 0m1.638s 00:13:34.076 user 0m4.394s 00:13:34.076 sys 0m0.315s 00:13:34.076 06:00:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.076 06:00:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:34.076 ************************************ 00:13:34.076 END TEST bdev_bounds 00:13:34.076 ************************************ 00:13:34.334 06:00:57 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:34.334 06:00:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:34.334 06:00:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.334 06:00:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.334 ************************************ 00:13:34.334 START TEST bdev_nbd 00:13:34.334 ************************************ 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
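(Aside: the bdev_nbd test being set up here takes a different path to the data — a bdev_svc app on the dedicated /var/tmp/spdk-nbd.sock exports each bdev as a kernel /dev/nbdX block node, and every node is verified with a single 4 KiB O_DIRECT read, which is what produces the repeated "1+0 records in / 1+0 records out" dd output below. A minimal sketch of one manual round trip, assuming the nbd kernel module is loaded — the script itself only checks for /sys/module/nbd — and that bdev_svc is already serving that socket:)

    cd /home/vagrant/spdk_repo/spdk
    # Export one bdev as an nbd block device
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    # Simplified form of the waitfornbd helper traced below: poll /proc/partitions until the node exists
    while ! grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # Read one block back through the kernel, then clean up, as the test does
    dd if=/dev/nbd0 of=./test/bdev/nbdtest bs=4096 count=1 iflag=direct
    rm -f ./test/bdev/nbdtest
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    # With every export removed, nbd_get_disks returns an empty list ('[]'), as at the end of this section
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks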
00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82423 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82423 /var/tmp/spdk-nbd.sock 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 82423 ']' 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:34.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.334 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:34.334 [2024-12-08 06:00:57.222589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:34.334 [2024-12-08 06:00:57.222960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.334 [2024-12-08 06:00:57.366689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.592 [2024-12-08 06:00:57.404131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.525 
1+0 records in 00:13:35.525 1+0 records out 00:13:35.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429499 s, 9.5 MB/s 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.525 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.783 1+0 records in 00:13:35.783 1+0 records out 00:13:35.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474871 s, 8.6 MB/s 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.783 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.784 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.784 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.784 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:36.042 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:36.042 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:36.300 06:00:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.300 1+0 records in 00:13:36.300 1+0 records out 00:13:36.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584009 s, 7.0 MB/s 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:36.300 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.558 1+0 records in 00:13:36.558 1+0 records out 00:13:36.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691763 s, 5.9 MB/s 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:36.558 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.816 1+0 records in 00:13:36.816 1+0 records out 00:13:36.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788465 s, 5.2 MB/s 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:36.816 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:37.073 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:37.074 06:01:00 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.074 1+0 records in 00:13:37.074 1+0 records out 00:13:37.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733859 s, 5.6 MB/s 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:37.074 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd0", 00:13:37.332 "bdev_name": "nvme0n1" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd1", 00:13:37.332 "bdev_name": "nvme1n1" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd2", 00:13:37.332 "bdev_name": "nvme2n1" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd3", 00:13:37.332 "bdev_name": "nvme2n2" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd4", 00:13:37.332 "bdev_name": "nvme2n3" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd5", 00:13:37.332 "bdev_name": "nvme3n1" 00:13:37.332 } 00:13:37.332 ]' 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd0", 00:13:37.332 "bdev_name": "nvme0n1" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd1", 00:13:37.332 "bdev_name": "nvme1n1" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd2", 00:13:37.332 "bdev_name": "nvme2n1" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd3", 00:13:37.332 "bdev_name": "nvme2n2" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd4", 00:13:37.332 "bdev_name": "nvme2n3" 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "nbd_device": "/dev/nbd5", 00:13:37.332 "bdev_name": "nvme3n1" 00:13:37.332 } 00:13:37.332 ]' 00:13:37.332 06:01:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.332 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.898 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.157 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.415 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.674 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.932 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.933 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.933 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.191 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:39.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:39.709 /dev/nbd0 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.709 1+0 records in 00:13:39.709 1+0 records out 00:13:39.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594085 s, 6.9 MB/s 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:39.709 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:39.967 /dev/nbd1 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.967 1+0 records in 00:13:39.967 1+0 records out 00:13:39.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549379 s, 7.5 MB/s 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:39.967 06:01:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:39.967 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:40.224 /dev/nbd10 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.224 1+0 records in 00:13:40.224 1+0 records out 00:13:40.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060022 s, 6.8 MB/s 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:40.224 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:40.482 /dev/nbd11 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:40.482 06:01:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.482 1+0 records in 00:13:40.482 1+0 records out 00:13:40.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695481 s, 5.9 MB/s 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:40.482 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:40.740 /dev/nbd12 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:40.740 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.998 1+0 records in 00:13:40.998 1+0 records out 00:13:40.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063757 s, 6.4 MB/s 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:40.998 06:01:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:40.998 /dev/nbd13 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.257 1+0 records in 00:13:41.257 1+0 records out 00:13:41.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081129 s, 5.0 MB/s 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.257 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd0", 00:13:41.515 "bdev_name": "nvme0n1" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd1", 00:13:41.515 "bdev_name": "nvme1n1" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd10", 00:13:41.515 "bdev_name": "nvme2n1" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd11", 00:13:41.515 "bdev_name": "nvme2n2" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd12", 00:13:41.515 "bdev_name": "nvme2n3" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd13", 00:13:41.515 "bdev_name": "nvme3n1" 00:13:41.515 } 00:13:41.515 ]' 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd0", 00:13:41.515 "bdev_name": "nvme0n1" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd1", 00:13:41.515 "bdev_name": "nvme1n1" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd10", 00:13:41.515 "bdev_name": "nvme2n1" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd11", 00:13:41.515 "bdev_name": "nvme2n2" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd12", 00:13:41.515 "bdev_name": "nvme2n3" 00:13:41.515 }, 00:13:41.515 { 00:13:41.515 "nbd_device": "/dev/nbd13", 00:13:41.515 "bdev_name": "nvme3n1" 00:13:41.515 } 00:13:41.515 ]' 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:41.515 /dev/nbd1 00:13:41.515 /dev/nbd10 00:13:41.515 /dev/nbd11 00:13:41.515 /dev/nbd12 00:13:41.515 /dev/nbd13' 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:41.515 /dev/nbd1 00:13:41.515 /dev/nbd10 00:13:41.515 /dev/nbd11 00:13:41.515 /dev/nbd12 00:13:41.515 /dev/nbd13' 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:41.515 256+0 records in 00:13:41.515 256+0 records out 00:13:41.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633489 s, 166 MB/s 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:41.515 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:41.774 256+0 records in 00:13:41.774 256+0 records out 00:13:41.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171969 s, 6.1 MB/s 00:13:41.774 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:41.774 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:41.774 256+0 records in 00:13:41.774 256+0 records out 00:13:41.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18499 s, 
5.7 MB/s 00:13:41.774 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:41.774 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:42.032 256+0 records in 00:13:42.032 256+0 records out 00:13:42.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168951 s, 6.2 MB/s 00:13:42.032 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.032 06:01:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:42.290 256+0 records in 00:13:42.290 256+0 records out 00:13:42.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162911 s, 6.4 MB/s 00:13:42.290 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.290 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:42.290 256+0 records in 00:13:42.290 256+0 records out 00:13:42.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142055 s, 7.4 MB/s 00:13:42.290 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.290 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:42.548 256+0 records in 00:13:42.548 256+0 records out 00:13:42.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146499 s, 7.2 MB/s 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:42.548 
06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.548 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.807 06:01:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.373 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.631 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.632 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.891 06:01:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:44.149 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:44.149 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.150 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.407 
06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.407 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:44.665 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:44.924 malloc_lvol_verify 00:13:44.924 06:01:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:45.183 3126af89-9a66-460a-8a6a-e9a7d9c1a7d6 00:13:45.183 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:45.441 e39d976b-9090-4ebd-ba1c-47661441f839 00:13:45.441 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:45.700 /dev/nbd0 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:45.700 mke2fs 1.47.0 (5-Feb-2023) 00:13:45.700 Discarding device blocks: 0/4096 
done 00:13:45.700 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:45.700 00:13:45.700 Allocating group tables: 0/1 done 00:13:45.700 Writing inode tables: 0/1 done 00:13:45.700 Creating journal (1024 blocks): done 00:13:45.700 Writing superblocks and filesystem accounting information: 0/1 done 00:13:45.700 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.700 06:01:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82423 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 82423 ']' 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 82423 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82423 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.267 killing process with pid 82423 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82423' 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 82423 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 82423 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:46.267 00:13:46.267 real 0m12.133s 00:13:46.267 user 0m17.778s 00:13:46.267 sys 0m4.046s 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.267 06:01:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:46.268 ************************************ 00:13:46.268 END TEST bdev_nbd 00:13:46.268 ************************************ 
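[Editor's note] The nbd traces above all follow one probe-and-verify pattern: each bdev is exported to a /dev/nbdX node over the RPC socket, /proc/partitions is polled until the kernel registers the device, a single 4 KiB direct-I/O read proves the export actually serves data, and then 1 MiB of random data is written through every node and compared back. A condensed sketch of that pattern in bash, using /tmp stand-ins for the nbdtest/nbdrandtest files in the log; the helper body and the retry sleep are illustrative, not the verbatim nbd_common.sh code:

    #!/usr/bin/env bash
    # Sketch of the readiness probe traced at autotest_common.sh@868-889 above.
    set -euo pipefail

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # stop polling once the kernel lists the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one 4 KiB direct-I/O read proves the export serves blocks
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
        rm -f /tmp/nbdtest
    }

    # write/verify pass as in nbd_common.sh@76-85: random data out, cmp back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        waitfornbd "$(basename "$nbd")"
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done
    rm /tmp/nbdrandtest

The iflag=direct/oflag=direct flags matter here: they bypass the page cache, so the comparison exercises the nbd data path rather than cached pages.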
00:13:46.268 06:01:09 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:46.268 06:01:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:46.268 06:01:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:46.268 06:01:09 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:46.268 06:01:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:46.268 06:01:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.268 06:01:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.527 ************************************ 00:13:46.527 START TEST bdev_fio 00:13:46.527 ************************************ 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:46.527 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:46.527 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:46.528 ************************************ 00:13:46.528 START TEST bdev_fio_rw_verify 00:13:46.528 ************************************ 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:46.528 06:01:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:46.787 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:46.787 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:46.787 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:46.787 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:46.787 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:46.787 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:46.787 fio-3.35 00:13:46.787 Starting 6 threads 00:13:59.036 00:13:59.036 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82844: Sun Dec 8 06:01:20 2024 00:13:59.036 read: IOPS=29.0k, BW=113MiB/s (119MB/s)(1133MiB/10001msec) 00:13:59.036 slat (usec): min=3, max=1091, avg= 6.82, stdev= 4.34 00:13:59.036 clat (usec): min=88, max=9783, avg=639.75, stdev=219.97 00:13:59.036 lat (usec): min=95, max=9796, avg=646.57, stdev=220.59 
00:13:59.036 clat percentiles (usec): 00:13:59.036 | 50.000th=[ 668], 99.000th=[ 1123], 99.900th=[ 1663], 99.990th=[ 3621], 00:13:59.036 | 99.999th=[ 9765] 00:13:59.036 write: IOPS=29.4k, BW=115MiB/s (120MB/s)(1149MiB/10001msec); 0 zone resets 00:13:59.036 slat (usec): min=14, max=1446, avg=26.46, stdev=25.94 00:13:59.036 clat (usec): min=102, max=4864, avg=723.83, stdev=237.77 00:13:59.036 lat (usec): min=120, max=4937, avg=750.29, stdev=239.77 00:13:59.036 clat percentiles (usec): 00:13:59.036 | 50.000th=[ 742], 99.000th=[ 1369], 99.900th=[ 2343], 99.990th=[ 3523], 00:13:59.036 | 99.999th=[ 4752] 00:13:59.036 bw ( KiB/s): min=98311, max=143922, per=100.00%, avg=117835.74, stdev=2081.86, samples=114 00:13:59.036 iops : min=24577, max=35980, avg=29458.74, stdev=520.46, samples=114 00:13:59.036 lat (usec) : 100=0.01%, 250=2.68%, 500=18.75%, 750=39.19%, 1000=33.69% 00:13:59.036 lat (msec) : 2=5.59%, 4=0.11%, 10=0.01% 00:13:59.036 cpu : usr=60.82%, sys=25.88%, ctx=8259, majf=0, minf=26558 00:13:59.036 IO depths : 1=11.9%, 2=24.3%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:59.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.036 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.036 issued rwts: total=290169,294095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:59.036 00:13:59.036 Run status group 0 (all jobs): 00:13:59.036 READ: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=1133MiB (1189MB), run=10001-10001msec 00:13:59.036 WRITE: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1149MiB (1205MB), run=10001-10001msec 00:13:59.036 ----------------------------------------------------- 00:13:59.036 Suppressions used: 00:13:59.036 count bytes template 00:13:59.036 6 48 /usr/src/fio/parse.c 00:13:59.036 3719 357024 /usr/src/fio/iolog.c 00:13:59.036 1 8 libtcmalloc_minimal.so 00:13:59.036 1 904 libcrypto.so 00:13:59.036 ----------------------------------------------------- 00:13:59.036 00:13:59.036 00:13:59.036 real 0m11.189s 00:13:59.036 user 0m37.254s 00:13:59.036 sys 0m15.847s 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:59.036 ************************************ 00:13:59.036 END TEST bdev_fio_rw_verify 00:13:59.036 ************************************ 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:59.036 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e7a1c2ca-29c2-484b-9a55-1c730329df10"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e7a1c2ca-29c2-484b-9a55-1c730329df10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8e3aa138-e815-43c4-be89-6d87935e05b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8e3aa138-e815-43c4-be89-6d87935e05b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "191126d9-6c9f-4375-b011-79bfa4479409"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "191126d9-6c9f-4375-b011-79bfa4479409",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c25474e1-1f9b-4fb2-84fb-c63afe59cb9e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c25474e1-1f9b-4fb2-84fb-c63afe59cb9e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cab0e201-3946-4eda-a7b6-5ce499f03cb0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cab0e201-3946-4eda-a7b6-5ce499f03cb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cfea88ce-b3c5-451f-b15b-bc6c1d9716cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cfea88ce-b3c5-451f-b15b-bc6c1d9716cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:59.037 /home/vagrant/spdk_repo/spdk 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:59.037 00:13:59.037 real 0m11.375s 00:13:59.037 user 
0m37.357s 00:13:59.037 sys 0m15.927s 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.037 06:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:59.037 ************************************ 00:13:59.037 END TEST bdev_fio 00:13:59.037 ************************************ 00:13:59.037 06:01:20 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:59.037 06:01:20 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:59.037 06:01:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:59.037 06:01:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.037 06:01:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.037 ************************************ 00:13:59.037 START TEST bdev_verify 00:13:59.037 ************************************ 00:13:59.037 06:01:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:59.037 [2024-12-08 06:01:20.830124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:59.037 [2024-12-08 06:01:20.830401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83008 ] 00:13:59.037 [2024-12-08 06:01:20.973293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:59.037 [2024-12-08 06:01:21.010877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.037 [2024-12-08 06:01:21.010919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.037 Running I/O for 5 seconds... 
00:14:00.546 22688.00 IOPS, 88.62 MiB/s [2024-12-08T06:01:24.530Z] 22688.00 IOPS, 88.62 MiB/s [2024-12-08T06:01:25.908Z] 22240.00 IOPS, 86.88 MiB/s [2024-12-08T06:01:26.474Z] 22328.50 IOPS, 87.22 MiB/s [2024-12-08T06:01:26.474Z] 21914.60 IOPS, 85.60 MiB/s 00:14:03.429 Latency(us) 00:14:03.429 [2024-12-08T06:01:26.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.429 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x0 length 0xa0000 00:14:03.429 nvme0n1 : 5.02 1607.89 6.28 0.00 0.00 79471.79 10426.18 74353.57 00:14:03.429 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0xa0000 length 0xa0000 00:14:03.429 nvme0n1 : 5.05 1597.94 6.24 0.00 0.00 79945.66 11915.64 68634.07 00:14:03.429 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x0 length 0xbd0bd 00:14:03.429 nvme1n1 : 5.05 2776.03 10.84 0.00 0.00 45833.26 5034.36 74830.20 00:14:03.429 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:03.429 nvme1n1 : 5.06 2746.19 10.73 0.00 0.00 46357.79 5779.08 76260.07 00:14:03.429 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x0 length 0x80000 00:14:03.429 nvme2n1 : 5.06 1644.58 6.42 0.00 0.00 77377.34 6196.13 87699.08 00:14:03.429 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x80000 length 0x80000 00:14:03.429 nvme2n1 : 5.06 1618.99 6.32 0.00 0.00 78549.57 9830.40 79119.83 00:14:03.429 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x0 length 0x80000 00:14:03.429 nvme2n2 : 5.06 1618.02 6.32 0.00 0.00 78525.49 7983.48 81979.58 00:14:03.429 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x80000 length 0x80000 00:14:03.429 nvme2n2 : 5.05 1597.09 6.24 0.00 0.00 79476.25 15192.44 70063.94 00:14:03.429 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x0 length 0x80000 00:14:03.429 nvme2n3 : 5.06 1617.60 6.32 0.00 0.00 78425.20 7298.33 79119.83 00:14:03.429 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x80000 length 0x80000 00:14:03.429 nvme2n3 : 5.07 1616.45 6.31 0.00 0.00 78385.03 5898.24 73400.32 00:14:03.429 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x0 length 0x20000 00:14:03.429 nvme3n1 : 5.06 1618.52 6.32 0.00 0.00 78249.07 6464.23 91988.71 00:14:03.429 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.429 Verification LBA range: start 0x20000 length 0x20000 00:14:03.429 nvme3n1 : 5.07 1615.24 6.31 0.00 0.00 78328.22 5570.56 80073.08 00:14:03.429 [2024-12-08T06:01:26.474Z] =================================================================================================================== 00:14:03.429 [2024-12-08T06:01:26.474Z] Total : 21674.55 84.67 0.00 0.00 70370.39 5034.36 91988.71 00:14:03.687 00:14:03.687 real 0m5.783s 00:14:03.687 user 0m9.004s 00:14:03.687 sys 0m1.650s 00:14:03.687 06:01:26 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.687 06:01:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:03.687 ************************************ 00:14:03.687 END TEST bdev_verify 00:14:03.687 ************************************ 00:14:03.687 06:01:26 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:03.687 06:01:26 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:03.688 06:01:26 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.688 06:01:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.688 ************************************ 00:14:03.688 START TEST bdev_verify_big_io 00:14:03.688 ************************************ 00:14:03.688 06:01:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:03.688 [2024-12-08 06:01:26.679313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:03.688 [2024-12-08 06:01:26.679492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83097 ] 00:14:03.945 [2024-12-08 06:01:26.826717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:03.945 [2024-12-08 06:01:26.862717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.946 [2024-12-08 06:01:26.862738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.204 Running I/O for 5 seconds... 
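[Editor's note] The big-I/O pass whose output follows is the same bdevperf verify run with the I/O size raised from 4096 to 65536 bytes (-o 65536). In its result table below, the MiB/s column is derived from the IOPS column as IOPS times I/O size; a quick check of the aggregate row (assuming the reported 1488.31 total IOPS at 64 KiB):

    # bandwidth = IOPS * io_size: 1488.31 * 65536 / 1048576 = 93.02 MiB/s,
    # matching the Total row of the table below
    awk 'BEGIN { printf "%.2f MiB/s\n", 1488.31 * 65536 / 1048576 }'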
00:14:10.774 976.00 IOPS, 61.00 MiB/s
[2024-12-08T06:01:33.819Z] 2608.00 IOPS, 163.00 MiB/s
00:14:10.774 Latency(us)
00:14:10.774 [2024-12-08T06:01:33.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:10.774 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x0 length 0xa000
00:14:10.774 nvme0n1 : 6.05 129.68 8.11 0.00 0.00 948866.38 7596.22 1067641.02
00:14:10.774 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0xa000 length 0xa000
00:14:10.774 nvme0n1 : 6.05 126.86 7.93 0.00 0.00 966315.91 123922.62 1349803.29
00:14:10.774 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x0 length 0xbd0b
00:14:10.774 nvme1n1 : 6.05 142.80 8.93 0.00 0.00 836569.52 99614.72 1410811.35
00:14:10.774 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0xbd0b length 0xbd0b
00:14:10.774 nvme1n1 : 6.06 147.91 9.24 0.00 0.00 802250.81 83886.08 1121023.07
00:14:10.774 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x0 length 0x8000
00:14:10.774 nvme2n1 : 6.05 130.83 8.18 0.00 0.00 884701.10 125829.12 1174405.12
00:14:10.774 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x8000 length 0x8000
00:14:10.774 nvme2n1 : 6.06 81.84 5.11 0.00 0.00 1406483.28 280255.77 2745362.62
00:14:10.774 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x0 length 0x8000
00:14:10.774 nvme2n2 : 6.08 131.64 8.23 0.00 0.00 871489.54 17158.52 1418437.35
00:14:10.774 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x8000 length 0x8000
00:14:10.774 nvme2n2 : 6.06 130.59 8.16 0.00 0.00 875583.53 22401.40 1982761.89
00:14:10.774 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x0 length 0x8000
00:14:10.774 nvme2n3 : 6.08 123.68 7.73 0.00 0.00 896975.83 15371.17 2013265.92
00:14:10.774 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x8000 length 0x8000
00:14:10.774 nvme2n3 : 6.08 126.39 7.90 0.00 0.00 875110.63 9115.46 1509949.44
00:14:10.774 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x0 length 0x2000
00:14:10.774 nvme3n1 : 6.07 89.60 5.60 0.00 0.00 1197790.62 2353.34 3340191.19
00:14:10.774 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:10.774 Verification LBA range: start 0x2000 length 0x2000
00:14:10.774 nvme3n1 : 6.07 126.50 7.91 0.00 0.00 847103.52 4170.47 1418437.35
00:14:10.774 [2024-12-08T06:01:33.819Z] ===================================================================================================================
00:14:10.774 [2024-12-08T06:01:33.819Z] Total : 1488.31 93.02 0.00 0.00 927015.94 2353.34 3340191.19
00:14:10.774
00:14:10.774 real 0m6.808s
00:14:10.774 user 0m12.467s
00:14:10.774 sys 0m0.477s
06:01:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:10.774 06:01:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
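Each result row above follows a fixed "name : runtime IOPS MiB/s Fail/s TO/s avg min max" shape, so the table is easy to post-process. A sketch, assuming the bdevperf output were captured to a file named bdevperf.log (a hypothetical capture; this harness does not write one):

# Sketch: pull device name and IOPS out of a captured bdevperf table.
# Rows look like "nvme0n1 : 6.05 129.68 8.11 ...", so $1 is the device
# and $4 the IOPS column; the Total row matches the same shape.
awk '$2 == ":" && NF >= 9 { printf "%-10s %12.2f IOPS\n", $1, $4 }' bdevperf.log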
00:14:10.774 ************************************
END TEST bdev_verify_big_io
************************************
06:01:33 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:10.774 06:01:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:14:10.774 06:01:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:10.774 06:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:10.774 ************************************
00:14:10.774 START TEST bdev_write_zeroes
00:14:10.774 ************************************
06:01:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:10.774 [2024-12-08 06:01:33.522268] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
[2024-12-08 06:01:33.522461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83196 ]
00:14:10.774 [2024-12-08 06:01:33.663403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:10.774 [2024-12-08 06:01:33.700570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:11.032 Running I/O for 1 seconds...
00:14:11.986 63072.00 IOPS, 246.38 MiB/s
00:14:11.986 Latency(us)
00:14:11.986 [2024-12-08T06:01:35.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:11.986 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:11.986 nvme0n1 : 1.03 9358.90 36.56 0.00 0.00 13662.59 8579.26 28478.37
00:14:11.986 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:11.986 nvme1n1 : 1.04 15393.91 60.13 0.00 0.00 8296.62 5034.36 22282.24
00:14:11.986 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:11.986 nvme2n1 : 1.03 9309.39 36.36 0.00 0.00 13662.37 5093.93 28478.37
00:14:11.986 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:11.986 nvme2n2 : 1.03 9295.57 36.31 0.00 0.00 13674.52 4885.41 29431.62
00:14:11.986 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:11.986 nvme2n3 : 1.03 9282.14 36.26 0.00 0.00 13682.19 4855.62 29789.09
00:14:11.987 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:11.987 nvme3n1 : 1.04 9268.47 36.20 0.00 0.00 13692.96 4885.41 30265.72
00:14:11.987 [2024-12-08T06:01:35.032Z] ===================================================================================================================
00:14:11.987 [2024-12-08T06:01:35.032Z] Total : 61908.38 241.83 0.00 0.00 12332.37 4855.62 30265.72
00:14:12.270
00:14:12.270 real 0m1.707s
00:14:12.270 user 0m1.000s
00:14:12.270 sys 0m0.525s
00:14:12.270 06:01:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:12.270 06:01:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:12.270 ************************************
00:14:12.270 END TEST
bdev_write_zeroes 00:14:12.270 ************************************ 00:14:12.270 06:01:35 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:12.270 06:01:35 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:12.270 06:01:35 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.270 06:01:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.270 ************************************ 00:14:12.270 START TEST bdev_json_nonenclosed 00:14:12.270 ************************************ 00:14:12.270 06:01:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:12.270 [2024-12-08 06:01:35.301836] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:12.270 [2024-12-08 06:01:35.302018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83234 ] 00:14:12.538 [2024-12-08 06:01:35.451628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.538 [2024-12-08 06:01:35.489844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.538 [2024-12-08 06:01:35.489985] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:12.538 [2024-12-08 06:01:35.490021] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:12.538 [2024-12-08 06:01:35.490047] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:12.796 00:14:12.796 real 0m0.392s 00:14:12.796 user 0m0.178s 00:14:12.796 sys 0m0.110s 00:14:12.796 06:01:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.796 06:01:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:12.796 ************************************ 00:14:12.796 END TEST bdev_json_nonenclosed 00:14:12.796 ************************************ 00:14:12.796 06:01:35 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:12.796 06:01:35 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:12.796 06:01:35 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.796 06:01:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.796 ************************************ 00:14:12.796 START TEST bdev_json_nonarray 00:14:12.796 ************************************ 00:14:12.796 06:01:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:12.796 [2024-12-08 06:01:35.743105] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
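The two json_config cases around this point probe the config loader from both sides: bdev_json_nonenclosed feeds bdevperf a config whose top level is not a JSON object (hence the "not enclosed in {}" error above), and bdev_json_nonarray follows with a config whose "subsystems" value is not an array. For contrast, a minimally valid shape, sketched as a shell heredoc and mirroring the '"subsystem": ..., "config": [...]' structure the saved configs later in this log exhibit:

# Sketch: the smallest config shape the loader exercised above accepts -- a
# single enclosing object whose "subsystems" value is an array (entry contents
# are illustrative).
cat > good.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF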
00:14:12.796 [2024-12-08 06:01:35.743328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83260 ] 00:14:13.054 [2024-12-08 06:01:35.891175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.054 [2024-12-08 06:01:35.925456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.054 [2024-12-08 06:01:35.925583] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:13.054 [2024-12-08 06:01:35.925616] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:13.054 [2024-12-08 06:01:35.925631] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:13.054 00:14:13.054 real 0m0.398s 00:14:13.054 user 0m0.185s 00:14:13.054 sys 0m0.108s 00:14:13.054 06:01:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.054 06:01:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:13.054 ************************************ 00:14:13.054 END TEST bdev_json_nonarray 00:14:13.054 ************************************ 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:13.054 06:01:36 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:13.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:16.154 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:16.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:16.154 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:16.154 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:16.154 00:14:16.154 real 0m51.871s 00:14:16.154 user 1m31.719s 00:14:16.154 sys 0m29.078s 00:14:16.154 06:01:38 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.154 06:01:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:16.154 ************************************ 00:14:16.154 END TEST blockdev_xnvme 00:14:16.154 ************************************ 00:14:16.154 06:01:38 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:16.154 06:01:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:16.154 06:01:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.154 06:01:38 -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.154 ************************************ 00:14:16.154 START TEST ublk 00:14:16.154 ************************************ 00:14:16.154 06:01:38 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:16.154 * Looking for test storage... 00:14:16.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:16.154 06:01:38 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:16.154 06:01:38 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:16.154 06:01:38 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.155 06:01:39 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.155 06:01:39 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.155 06:01:39 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.155 06:01:39 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.155 06:01:39 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.155 06:01:39 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.155 06:01:39 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.155 06:01:39 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:16.155 06:01:39 ublk -- scripts/common.sh@345 -- # : 1 00:14:16.155 06:01:39 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.155 06:01:39 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.155 06:01:39 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:16.155 06:01:39 ublk -- scripts/common.sh@353 -- # local d=1 00:14:16.155 06:01:39 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.155 06:01:39 ublk -- scripts/common.sh@355 -- # echo 1 00:14:16.155 06:01:39 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.155 06:01:39 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@353 -- # local d=2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.155 06:01:39 ublk -- scripts/common.sh@355 -- # echo 2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.155 06:01:39 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.155 06:01:39 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.155 06:01:39 ublk -- scripts/common.sh@368 -- # return 0 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:16.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.155 --rc genhtml_branch_coverage=1 00:14:16.155 --rc genhtml_function_coverage=1 00:14:16.155 --rc genhtml_legend=1 00:14:16.155 --rc geninfo_all_blocks=1 00:14:16.155 --rc geninfo_unexecuted_blocks=1 00:14:16.155 00:14:16.155 ' 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:16.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.155 --rc genhtml_branch_coverage=1 00:14:16.155 --rc genhtml_function_coverage=1 00:14:16.155 --rc genhtml_legend=1 00:14:16.155 --rc geninfo_all_blocks=1 00:14:16.155 --rc geninfo_unexecuted_blocks=1 00:14:16.155 00:14:16.155 ' 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:16.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.155 --rc genhtml_branch_coverage=1 00:14:16.155 --rc genhtml_function_coverage=1 00:14:16.155 --rc genhtml_legend=1 00:14:16.155 --rc geninfo_all_blocks=1 00:14:16.155 --rc geninfo_unexecuted_blocks=1 00:14:16.155 00:14:16.155 ' 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:16.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.155 --rc genhtml_branch_coverage=1 00:14:16.155 --rc genhtml_function_coverage=1 00:14:16.155 --rc genhtml_legend=1 00:14:16.155 --rc geninfo_all_blocks=1 00:14:16.155 --rc geninfo_unexecuted_blocks=1 00:14:16.155 00:14:16.155 ' 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:16.155 06:01:39 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:16.155 06:01:39 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:16.155 06:01:39 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:16.155 06:01:39 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:16.155 06:01:39 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:16.155 06:01:39 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:16.155 06:01:39 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:16.155 06:01:39 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:16.155 06:01:39 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:16.155 06:01:39 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.155 06:01:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.155 ************************************ 00:14:16.155 START TEST test_save_ublk_config 00:14:16.155 ************************************ 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83540 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83540 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83540 ']' 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.155 06:01:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:16.415 [2024-12-08 06:01:39.197796] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
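The "modprobe ublk_drv" step above gates the whole ublk suite: every /dev/ublkbN device these tests create exists only while the kernel-side ublk driver is loaded. A pre-flight sketch using only names visible in this log:

# Sketch: pre-flight for the ublk tests (module/device names as seen here).
modprobe ublk_drv             # load the kernel ublk driver, as ublk.sh@133 does
lsmod | grep -w ublk_drv      # confirm the module is resident
# once ublk_start_disk succeeds, the device surfaces as /dev/ublkb<id>:
ls -l /dev/ublkb0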
00:14:16.415 [2024-12-08 06:01:39.197996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83540 ] 00:14:16.415 [2024-12-08 06:01:39.338710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.415 [2024-12-08 06:01:39.398698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:17.354 [2024-12-08 06:01:40.190310] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:17.354 [2024-12-08 06:01:40.190713] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:17.354 malloc0 00:14:17.354 [2024-12-08 06:01:40.214384] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:17.354 [2024-12-08 06:01:40.214480] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:17.354 [2024-12-08 06:01:40.214495] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:17.354 [2024-12-08 06:01:40.214507] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:17.354 [2024-12-08 06:01:40.223326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:17.354 [2024-12-08 06:01:40.223363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:17.354 [2024-12-08 06:01:40.230224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:17.354 [2024-12-08 06:01:40.230356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:17.354 [2024-12-08 06:01:40.246268] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:17.354 0 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.354 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:17.614 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.614 06:01:40 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:17.614 "subsystems": [ 00:14:17.614 { 00:14:17.614 "subsystem": "fsdev", 00:14:17.614 "config": [ 00:14:17.614 { 00:14:17.614 "method": "fsdev_set_opts", 00:14:17.614 "params": { 00:14:17.614 "fsdev_io_pool_size": 65535, 00:14:17.614 "fsdev_io_cache_size": 256 00:14:17.614 } 00:14:17.614 } 00:14:17.614 ] 00:14:17.614 }, 00:14:17.614 { 00:14:17.614 "subsystem": "keyring", 00:14:17.614 "config": [] 00:14:17.614 }, 00:14:17.614 { 00:14:17.614 "subsystem": "iobuf", 00:14:17.614 "config": [ 00:14:17.614 { 
00:14:17.614 "method": "iobuf_set_options", 00:14:17.614 "params": { 00:14:17.614 "small_pool_count": 8192, 00:14:17.614 "large_pool_count": 1024, 00:14:17.614 "small_bufsize": 8192, 00:14:17.614 "large_bufsize": 135168 00:14:17.614 } 00:14:17.614 } 00:14:17.614 ] 00:14:17.614 }, 00:14:17.614 { 00:14:17.614 "subsystem": "sock", 00:14:17.614 "config": [ 00:14:17.614 { 00:14:17.614 "method": "sock_set_default_impl", 00:14:17.614 "params": { 00:14:17.614 "impl_name": "posix" 00:14:17.614 } 00:14:17.614 }, 00:14:17.614 { 00:14:17.614 "method": "sock_impl_set_options", 00:14:17.614 "params": { 00:14:17.614 "impl_name": "ssl", 00:14:17.614 "recv_buf_size": 4096, 00:14:17.614 "send_buf_size": 4096, 00:14:17.614 "enable_recv_pipe": true, 00:14:17.614 "enable_quickack": false, 00:14:17.614 "enable_placement_id": 0, 00:14:17.614 "enable_zerocopy_send_server": true, 00:14:17.614 "enable_zerocopy_send_client": false, 00:14:17.615 "zerocopy_threshold": 0, 00:14:17.615 "tls_version": 0, 00:14:17.615 "enable_ktls": false 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "sock_impl_set_options", 00:14:17.615 "params": { 00:14:17.615 "impl_name": "posix", 00:14:17.615 "recv_buf_size": 2097152, 00:14:17.615 "send_buf_size": 2097152, 00:14:17.615 "enable_recv_pipe": true, 00:14:17.615 "enable_quickack": false, 00:14:17.615 "enable_placement_id": 0, 00:14:17.615 "enable_zerocopy_send_server": true, 00:14:17.615 "enable_zerocopy_send_client": false, 00:14:17.615 "zerocopy_threshold": 0, 00:14:17.615 "tls_version": 0, 00:14:17.615 "enable_ktls": false 00:14:17.615 } 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "vmd", 00:14:17.615 "config": [] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "accel", 00:14:17.615 "config": [ 00:14:17.615 { 00:14:17.615 "method": "accel_set_options", 00:14:17.615 "params": { 00:14:17.615 "small_cache_size": 128, 00:14:17.615 "large_cache_size": 16, 00:14:17.615 "task_count": 2048, 00:14:17.615 "sequence_count": 2048, 00:14:17.615 "buf_count": 2048 00:14:17.615 } 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "bdev", 00:14:17.615 "config": [ 00:14:17.615 { 00:14:17.615 "method": "bdev_set_options", 00:14:17.615 "params": { 00:14:17.615 "bdev_io_pool_size": 65535, 00:14:17.615 "bdev_io_cache_size": 256, 00:14:17.615 "bdev_auto_examine": true, 00:14:17.615 "iobuf_small_cache_size": 128, 00:14:17.615 "iobuf_large_cache_size": 16 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "bdev_raid_set_options", 00:14:17.615 "params": { 00:14:17.615 "process_window_size_kb": 1024, 00:14:17.615 "process_max_bandwidth_mb_sec": 0 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "bdev_iscsi_set_options", 00:14:17.615 "params": { 00:14:17.615 "timeout_sec": 30 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "bdev_nvme_set_options", 00:14:17.615 "params": { 00:14:17.615 "action_on_timeout": "none", 00:14:17.615 "timeout_us": 0, 00:14:17.615 "timeout_admin_us": 0, 00:14:17.615 "keep_alive_timeout_ms": 10000, 00:14:17.615 "arbitration_burst": 0, 00:14:17.615 "low_priority_weight": 0, 00:14:17.615 "medium_priority_weight": 0, 00:14:17.615 "high_priority_weight": 0, 00:14:17.615 "nvme_adminq_poll_period_us": 10000, 00:14:17.615 "nvme_ioq_poll_period_us": 0, 00:14:17.615 "io_queue_requests": 0, 00:14:17.615 "delay_cmd_submit": true, 00:14:17.615 "transport_retry_count": 4, 00:14:17.615 "bdev_retry_count": 3, 00:14:17.615 
"transport_ack_timeout": 0, 00:14:17.615 "ctrlr_loss_timeout_sec": 0, 00:14:17.615 "reconnect_delay_sec": 0, 00:14:17.615 "fast_io_fail_timeout_sec": 0, 00:14:17.615 "disable_auto_failback": false, 00:14:17.615 "generate_uuids": false, 00:14:17.615 "transport_tos": 0, 00:14:17.615 "nvme_error_stat": false, 00:14:17.615 "rdma_srq_size": 0, 00:14:17.615 "io_path_stat": false, 00:14:17.615 "allow_accel_sequence": false, 00:14:17.615 "rdma_max_cq_size": 0, 00:14:17.615 "rdma_cm_event_timeout_ms": 0, 00:14:17.615 "dhchap_digests": [ 00:14:17.615 "sha256", 00:14:17.615 "sha384", 00:14:17.615 "sha512" 00:14:17.615 ], 00:14:17.615 "dhchap_dhgroups": [ 00:14:17.615 "null", 00:14:17.615 "ffdhe2048", 00:14:17.615 "ffdhe3072", 00:14:17.615 "ffdhe4096", 00:14:17.615 "ffdhe6144", 00:14:17.615 "ffdhe8192" 00:14:17.615 ] 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "bdev_nvme_set_hotplug", 00:14:17.615 "params": { 00:14:17.615 "period_us": 100000, 00:14:17.615 "enable": false 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "bdev_malloc_create", 00:14:17.615 "params": { 00:14:17.615 "name": "malloc0", 00:14:17.615 "num_blocks": 8192, 00:14:17.615 "block_size": 4096, 00:14:17.615 "physical_block_size": 4096, 00:14:17.615 "uuid": "c87df55b-be8e-470a-83b1-6d8312918b0d", 00:14:17.615 "optimal_io_boundary": 0, 00:14:17.615 "md_size": 0, 00:14:17.615 "dif_type": 0, 00:14:17.615 "dif_is_head_of_md": false, 00:14:17.615 "dif_pi_format": 0 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "bdev_wait_for_examine" 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "scsi", 00:14:17.615 "config": null 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "scheduler", 00:14:17.615 "config": [ 00:14:17.615 { 00:14:17.615 "method": "framework_set_scheduler", 00:14:17.615 "params": { 00:14:17.615 "name": "static" 00:14:17.615 } 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "vhost_scsi", 00:14:17.615 "config": [] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "vhost_blk", 00:14:17.615 "config": [] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "ublk", 00:14:17.615 "config": [ 00:14:17.615 { 00:14:17.615 "method": "ublk_create_target", 00:14:17.615 "params": { 00:14:17.615 "cpumask": "1" 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "ublk_start_disk", 00:14:17.615 "params": { 00:14:17.615 "bdev_name": "malloc0", 00:14:17.615 "ublk_id": 0, 00:14:17.615 "num_queues": 1, 00:14:17.615 "queue_depth": 128 00:14:17.615 } 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "nbd", 00:14:17.615 "config": [] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "nvmf", 00:14:17.615 "config": [ 00:14:17.615 { 00:14:17.615 "method": "nvmf_set_config", 00:14:17.615 "params": { 00:14:17.615 "discovery_filter": "match_any", 00:14:17.615 "admin_cmd_passthru": { 00:14:17.615 "identify_ctrlr": false 00:14:17.615 }, 00:14:17.615 "dhchap_digests": [ 00:14:17.615 "sha256", 00:14:17.615 "sha384", 00:14:17.615 "sha512" 00:14:17.615 ], 00:14:17.615 "dhchap_dhgroups": [ 00:14:17.615 "null", 00:14:17.615 "ffdhe2048", 00:14:17.615 "ffdhe3072", 00:14:17.615 "ffdhe4096", 00:14:17.615 "ffdhe6144", 00:14:17.615 "ffdhe8192" 00:14:17.615 ] 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "method": "nvmf_set_max_subsystems", 00:14:17.615 "params": { 00:14:17.615 "max_subsystems": 1024 00:14:17.615 } 00:14:17.615 }, 00:14:17.615 
{ 00:14:17.615 "method": "nvmf_set_crdt", 00:14:17.615 "params": { 00:14:17.615 "crdt1": 0, 00:14:17.615 "crdt2": 0, 00:14:17.615 "crdt3": 0 00:14:17.615 } 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 }, 00:14:17.615 { 00:14:17.615 "subsystem": "iscsi", 00:14:17.615 "config": [ 00:14:17.615 { 00:14:17.615 "method": "iscsi_set_options", 00:14:17.615 "params": { 00:14:17.615 "node_base": "iqn.2016-06.io.spdk", 00:14:17.615 "max_sessions": 128, 00:14:17.615 "max_connections_per_session": 2, 00:14:17.615 "max_queue_depth": 64, 00:14:17.615 "default_time2wait": 2, 00:14:17.615 "default_time2retain": 20, 00:14:17.615 "first_burst_length": 8192, 00:14:17.615 "immediate_data": true, 00:14:17.615 "allow_duplicated_isid": false, 00:14:17.615 "error_recovery_level": 0, 00:14:17.615 "nop_timeout": 60, 00:14:17.615 "nop_in_interval": 30, 00:14:17.615 "disable_chap": false, 00:14:17.615 "require_chap": false, 00:14:17.615 "mutual_chap": false, 00:14:17.615 "chap_group": 0, 00:14:17.615 "max_large_datain_per_connection": 64, 00:14:17.615 "max_r2t_per_connection": 4, 00:14:17.615 "pdu_pool_size": 36864, 00:14:17.615 "immediate_data_pool_size": 16384, 00:14:17.615 "data_out_pool_size": 2048 00:14:17.615 } 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 } 00:14:17.615 ] 00:14:17.615 }' 00:14:17.615 06:01:40 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83540 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83540 ']' 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83540 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83540 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.616 killing process with pid 83540 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83540' 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83540 00:14:17.616 06:01:40 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83540 00:14:17.876 [2024-12-08 06:01:40.727797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:17.876 [2024-12-08 06:01:40.762276] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:17.876 [2024-12-08 06:01:40.762457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:17.876 [2024-12-08 06:01:40.771210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:17.876 [2024-12-08 06:01:40.771307] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:17.876 [2024-12-08 06:01:40.771323] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:17.876 [2024-12-08 06:01:40.771361] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:17.876 [2024-12-08 06:01:40.771537] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83578 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83578 
00:14:18.135 06:01:41 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83578 ']' 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:18.135 06:01:41 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:18.135 "subsystems": [ 00:14:18.135 { 00:14:18.135 "subsystem": "fsdev", 00:14:18.135 "config": [ 00:14:18.135 { 00:14:18.135 "method": "fsdev_set_opts", 00:14:18.135 "params": { 00:14:18.135 "fsdev_io_pool_size": 65535, 00:14:18.135 "fsdev_io_cache_size": 256 00:14:18.135 } 00:14:18.135 } 00:14:18.135 ] 00:14:18.135 }, 00:14:18.135 { 00:14:18.135 "subsystem": "keyring", 00:14:18.135 "config": [] 00:14:18.135 }, 00:14:18.135 { 00:14:18.135 "subsystem": "iobuf", 00:14:18.135 "config": [ 00:14:18.135 { 00:14:18.135 "method": "iobuf_set_options", 00:14:18.135 "params": { 00:14:18.135 "small_pool_count": 8192, 00:14:18.135 "large_pool_count": 1024, 00:14:18.135 "small_bufsize": 8192, 00:14:18.135 "large_bufsize": 135168 00:14:18.135 } 00:14:18.135 } 00:14:18.135 ] 00:14:18.135 }, 00:14:18.135 { 00:14:18.135 "subsystem": "sock", 00:14:18.135 "config": [ 00:14:18.135 { 00:14:18.135 "method": "sock_set_default_impl", 00:14:18.135 "params": { 00:14:18.135 "impl_name": "posix" 00:14:18.135 } 00:14:18.135 }, 00:14:18.135 { 00:14:18.135 "method": "sock_impl_set_options", 00:14:18.135 "params": { 00:14:18.135 "impl_name": "ssl", 00:14:18.135 "recv_buf_size": 4096, 00:14:18.135 "send_buf_size": 4096, 00:14:18.135 "enable_recv_pipe": true, 00:14:18.135 "enable_quickack": false, 00:14:18.135 "enable_placement_id": 0, 00:14:18.135 "enable_zerocopy_send_server": true, 00:14:18.135 "enable_zerocopy_send_client": false, 00:14:18.135 "zerocopy_threshold": 0, 00:14:18.135 "tls_version": 0, 00:14:18.135 "enable_ktls": false 00:14:18.135 } 00:14:18.135 }, 00:14:18.135 { 00:14:18.135 "method": "sock_impl_set_options", 00:14:18.135 "params": { 00:14:18.135 "impl_name": "posix", 00:14:18.136 "recv_buf_size": 2097152, 00:14:18.136 "send_buf_size": 2097152, 00:14:18.136 "enable_recv_pipe": true, 00:14:18.136 "enable_quickack": false, 00:14:18.136 "enable_placement_id": 0, 00:14:18.136 "enable_zerocopy_send_server": true, 00:14:18.136 "enable_zerocopy_send_client": false, 00:14:18.136 "zerocopy_threshold": 0, 00:14:18.136 "tls_version": 0, 00:14:18.136 "enable_ktls": false 00:14:18.136 } 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "vmd", 00:14:18.136 "config": [] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "accel", 00:14:18.136 "config": [ 00:14:18.136 { 00:14:18.136 "method": "accel_set_options", 00:14:18.136 "params": { 00:14:18.136 "small_cache_size": 128, 00:14:18.136 "large_cache_size": 16, 00:14:18.136 "task_count": 2048, 00:14:18.136 
"sequence_count": 2048, 00:14:18.136 "buf_count": 2048 00:14:18.136 } 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "bdev", 00:14:18.136 "config": [ 00:14:18.136 { 00:14:18.136 "method": "bdev_set_options", 00:14:18.136 "params": { 00:14:18.136 "bdev_io_pool_size": 65535, 00:14:18.136 "bdev_io_cache_size": 256, 00:14:18.136 "bdev_auto_examine": true, 00:14:18.136 "iobuf_small_cache_size": 128, 00:14:18.136 "iobuf_large_cache_size": 16 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "bdev_raid_set_options", 00:14:18.136 "params": { 00:14:18.136 "process_window_size_kb": 1024, 00:14:18.136 "process_max_bandwidth_mb_sec": 0 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "bdev_iscsi_set_options", 00:14:18.136 "params": { 00:14:18.136 "timeout_sec": 30 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "bdev_nvme_set_options", 00:14:18.136 "params": { 00:14:18.136 "action_on_timeout": "none", 00:14:18.136 "timeout_us": 0, 00:14:18.136 "timeout_admin_us": 0, 00:14:18.136 "keep_alive_timeout_ms": 10000, 00:14:18.136 "arbitration_burst": 0, 00:14:18.136 "low_priority_weight": 0, 00:14:18.136 "medium_priority_weight": 0, 00:14:18.136 "high_priority_weight": 0, 00:14:18.136 "nvme_adminq_poll_period_us": 10000, 00:14:18.136 "nvme_ioq_poll_period_us": 0, 00:14:18.136 "io_queue_requests": 0, 00:14:18.136 "delay_cmd_submit": true, 00:14:18.136 "transport_retry_count": 4, 00:14:18.136 "bdev_retry_count": 3, 00:14:18.136 "transport_ack_timeout": 0, 00:14:18.136 "ctrlr_loss_timeout_sec": 0, 00:14:18.136 "reconnect_delay_sec": 0, 00:14:18.136 "fast_io_fail_timeout_sec": 0, 00:14:18.136 "disable_auto_failback": false, 00:14:18.136 "generate_uuids": false, 00:14:18.136 "transport_tos": 0, 00:14:18.136 "nvme_error_stat": false, 00:14:18.136 "rdma_srq_size": 0, 00:14:18.136 "io_path_stat": false, 00:14:18.136 "allow_accel_sequence": false, 00:14:18.136 "rdma_max_cq_size": 0, 00:14:18.136 "rdma_cm_event_timeout_ms": 0, 00:14:18.136 "dhchap_digests": [ 00:14:18.136 "sha256", 00:14:18.136 "sha384", 00:14:18.136 "sha512" 00:14:18.136 ], 00:14:18.136 "dhchap_dhgroups": [ 00:14:18.136 "null", 00:14:18.136 "ffdhe2048", 00:14:18.136 "ffdhe3072", 00:14:18.136 "ffdhe4096", 00:14:18.136 "ffdhe6144", 00:14:18.136 "ffdhe8192" 00:14:18.136 ] 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "bdev_nvme_set_hotplug", 00:14:18.136 "params": { 00:14:18.136 "period_us": 100000, 00:14:18.136 "enable": false 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "bdev_malloc_create", 00:14:18.136 "params": { 00:14:18.136 "name": "malloc0", 00:14:18.136 "num_blocks": 8192, 00:14:18.136 "block_size": 4096, 00:14:18.136 "physical_block_size": 4096, 00:14:18.136 "uuid": "c87df55b-be8e-470a-83b1-6d8312918b0d", 00:14:18.136 "optimal_io_boundary": 0, 00:14:18.136 "md_size": 0, 00:14:18.136 "dif_type": 0, 00:14:18.136 "dif_is_head_of_md": false, 00:14:18.136 "dif_pi_format": 0 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "bdev_wait_for_examine" 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "scsi", 00:14:18.136 "config": null 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "scheduler", 00:14:18.136 "config": [ 00:14:18.136 { 00:14:18.136 "method": "framework_set_scheduler", 00:14:18.136 "params": { 00:14:18.136 "name": "static" 00:14:18.136 } 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": 
"vhost_scsi", 00:14:18.136 "config": [] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "vhost_blk", 00:14:18.136 "config": [] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "ublk", 00:14:18.136 "config": [ 00:14:18.136 { 00:14:18.136 "method": "ublk_create_target", 00:14:18.136 "params": { 00:14:18.136 "cpumask": "1" 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "ublk_start_disk", 00:14:18.136 "params": { 00:14:18.136 "bdev_name": "malloc0", 00:14:18.136 "ublk_id": 0, 00:14:18.136 "num_queues": 1, 00:14:18.136 "queue_depth": 128 00:14:18.136 } 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "nbd", 00:14:18.136 "config": [] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "nvmf", 00:14:18.136 "config": [ 00:14:18.136 { 00:14:18.136 "method": "nvmf_set_config", 00:14:18.136 "params": { 00:14:18.136 "discovery_filter": "match_any", 00:14:18.136 "admin_cmd_passthru": { 00:14:18.136 "identify_ctrlr": false 00:14:18.136 }, 00:14:18.136 "dhchap_digests": [ 00:14:18.136 "sha256", 00:14:18.136 "sha384", 00:14:18.136 "sha512" 00:14:18.136 ], 00:14:18.136 "dhchap_dhgroups": [ 00:14:18.136 "null", 00:14:18.136 "ffdhe2048", 00:14:18.136 "ffdhe3072", 00:14:18.136 "ffdhe4096", 00:14:18.136 "ffdhe6144", 00:14:18.136 "ffdhe8192" 00:14:18.136 ] 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "nvmf_set_max_subsystems", 00:14:18.136 "params": { 00:14:18.136 "max_subsystems": 1024 00:14:18.136 } 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "method": "nvmf_set_crdt", 00:14:18.136 "params": { 00:14:18.136 "crdt1": 0, 00:14:18.136 "crdt2": 0, 00:14:18.136 "crdt3": 0 00:14:18.136 } 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 }, 00:14:18.136 { 00:14:18.136 "subsystem": "iscsi", 00:14:18.136 "config": [ 00:14:18.136 { 00:14:18.136 "method": "iscsi_set_options", 00:14:18.136 "params": { 00:14:18.136 "node_base": "iqn.2016-06.io.spdk", 00:14:18.136 "max_sessions": 128, 00:14:18.136 "max_connections_per_session": 2, 00:14:18.136 "max_queue_depth": 64, 00:14:18.136 "default_time2wait": 2, 00:14:18.136 "default_time2retain": 20, 00:14:18.136 "first_burst_length": 8192, 00:14:18.136 "immediate_data": true, 00:14:18.136 "allow_duplicated_isid": false, 00:14:18.136 "error_recovery_level": 0, 00:14:18.136 "nop_timeout": 60, 00:14:18.136 "nop_in_interval": 30, 00:14:18.136 "disable_chap": false, 00:14:18.136 "require_chap": false, 00:14:18.136 "mutual_chap": false, 00:14:18.136 "chap_group": 0, 00:14:18.136 "max_large_datain_per_connection": 64, 00:14:18.136 "max_r2t_per_connection": 4, 00:14:18.136 "pdu_pool_size": 36864, 00:14:18.136 "immediate_data_pool_size": 16384, 00:14:18.136 "data_out_pool_size": 2048 00:14:18.136 } 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 } 00:14:18.136 ] 00:14:18.136 }' 00:14:18.396 [2024-12-08 06:01:41.269164] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:18.396 [2024-12-08 06:01:41.269368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83578 ] 00:14:18.396 [2024-12-08 06:01:41.418318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.677 [2024-12-08 06:01:41.463726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.937 [2024-12-08 06:01:41.771216] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:18.937 [2024-12-08 06:01:41.771598] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:18.937 [2024-12-08 06:01:41.779374] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:18.937 [2024-12-08 06:01:41.779450] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:18.937 [2024-12-08 06:01:41.779463] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:18.937 [2024-12-08 06:01:41.779471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:18.937 [2024-12-08 06:01:41.788313] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:18.937 [2024-12-08 06:01:41.788343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:18.937 [2024-12-08 06:01:41.794308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:18.937 [2024-12-08 06:01:41.794434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:18.937 [2024-12-08 06:01:41.810228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83578 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83578 ']' 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83578 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.196 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83578 00:14:19.456 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:19.456 killing process with pid 83578 00:14:19.456 
06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:19.456 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83578' 00:14:19.456 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83578 00:14:19.456 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83578 00:14:19.456 [2024-12-08 06:01:42.455892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:19.456 [2024-12-08 06:01:42.492336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:19.456 [2024-12-08 06:01:42.492487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:19.715 [2024-12-08 06:01:42.500222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:19.715 [2024-12-08 06:01:42.500283] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:19.715 [2024-12-08 06:01:42.500297] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:19.715 [2024-12-08 06:01:42.500338] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:19.715 [2024-12-08 06:01:42.500511] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:19.974 06:01:42 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:19.974 00:14:19.974 real 0m3.803s 00:14:19.974 user 0m3.085s 00:14:19.974 sys 0m1.679s 00:14:19.974 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.974 ************************************ 00:14:19.974 06:01:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:19.974 END TEST test_save_ublk_config 00:14:19.974 ************************************ 00:14:19.974 06:01:42 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83637 00:14:19.974 06:01:42 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:19.974 06:01:42 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.974 06:01:42 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83637 00:14:19.974 06:01:42 ublk -- common/autotest_common.sh@831 -- # '[' -z 83637 ']' 00:14:19.974 06:01:42 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.974 06:01:42 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.974 06:01:42 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.974 06:01:42 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.974 06:01:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:20.234 [2024-12-08 06:01:43.043948] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
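This second daemon (pid 83637) is launched with -m 0x3, and the "Total cores available: 2" plus the two reactor notices just below are the direct readout of that flag: the core mask is a bitmap with bit N selecting core N, so 0x3 = 0b11 pins reactors to cores 0 and 1. A quick decode in plain bash arithmetic:

# Sketch: decoding an SPDK core mask such as the -m 0x3 above.
mask=0x3
for core in 0 1 2 3; do
    (( mask >> core & 1 )) && echo "reactor expected on core $core"
done
# prints cores 0 and 1, matching the two "Reactor started" notices below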
00:14:20.234 [2024-12-08 06:01:43.044152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83637 ] 00:14:20.234 [2024-12-08 06:01:43.192087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:20.234 [2024-12-08 06:01:43.228080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.234 [2024-12-08 06:01:43.228126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.172 06:01:44 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.172 06:01:44 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:21.172 06:01:44 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:21.172 06:01:44 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:21.172 06:01:44 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.172 06:01:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.172 ************************************ 00:14:21.172 START TEST test_create_ublk 00:14:21.172 ************************************ 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.172 [2024-12-08 06:01:44.038345] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:21.172 [2024-12-08 06:01:44.039519] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.172 [2024-12-08 06:01:44.101422] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:21.172 [2024-12-08 06:01:44.101922] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:21.172 [2024-12-08 06:01:44.101947] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:21.172 [2024-12-08 06:01:44.101973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.172 [2024-12-08 06:01:44.110482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.172 [2024-12-08 06:01:44.110542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.172 
[2024-12-08 06:01:44.116274] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.172 [2024-12-08 06:01:44.117149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:21.172 [2024-12-08 06:01:44.132238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.172 06:01:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:21.172 { 00:14:21.172 "ublk_device": "/dev/ublkb0", 00:14:21.172 "id": 0, 00:14:21.172 "queue_depth": 512, 00:14:21.172 "num_queues": 4, 00:14:21.172 "bdev_name": "Malloc0" 00:14:21.172 } 00:14:21.172 ]' 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:21.172 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:21.432 06:01:44 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:14:21.432 06:01:44 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:21.691 fio: verification read phase will never start because write phase uses all of runtime 00:14:21.691 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:21.691 fio-3.35 00:14:21.691 Starting 1 process 00:14:31.672 00:14:31.672 fio_test: (groupid=0, jobs=1): err= 0: pid=83682: Sun Dec 8 06:01:54 2024 00:14:31.672 write: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(438MiB/10001msec); 0 zone resets 00:14:31.672 clat (usec): min=54, max=3965, avg=87.85, stdev=136.72 00:14:31.672 lat (usec): min=55, max=3965, avg=88.52, stdev=136.74 00:14:31.672 clat percentiles (usec): 00:14:31.672 | 1.00th=[ 62], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 74], 00:14:31.672 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 78], 00:14:31.672 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 97], 95.00th=[ 105], 00:14:31.672 | 99.00th=[ 127], 99.50th=[ 165], 99.90th=[ 2835], 99.95th=[ 3163], 00:14:31.672 | 99.99th=[ 3752] 00:14:31.672 bw ( KiB/s): min=43968, max=47808, per=100.00%, avg=44931.79, stdev=768.46, samples=19 00:14:31.672 iops : min=10992, max=11952, avg=11232.95, stdev=192.11, samples=19 00:14:31.672 lat (usec) : 100=92.24%, 250=7.36%, 500=0.04%, 750=0.01%, 1000=0.03% 00:14:31.672 lat (msec) : 2=0.13%, 4=0.19% 00:14:31.672 cpu : usr=3.14%, sys=7.40%, ctx=112227, majf=0, minf=797 00:14:31.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.672 issued rwts: total=0,112231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:31.672 00:14:31.672 Run status group 0 (all jobs): 00:14:31.672 WRITE: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=438MiB (460MB), run=10001-10001msec 00:14:31.672 00:14:31.672 Disk stats (read/write): 00:14:31.672 ublkb0: ios=0/111077, merge=0/0, ticks=0/8922, in_queue=8922, util=99.10% 00:14:31.672 06:01:54 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.672 [2024-12-08 06:01:54.644098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:31.672 [2024-12-08 06:01:54.694285] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:31.672 [2024-12-08 06:01:54.695293] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:31.672 [2024-12-08 06:01:54.704285] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:31.672 [2024-12-08 06:01:54.704654] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:31.672 [2024-12-08 06:01:54.704677] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.672 06:01:54 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:31.672 06:01:54 
ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.672 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.930 [2024-12-08 06:01:54.719380] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:31.930 request: 00:14:31.930 { 00:14:31.930 "ublk_id": 0, 00:14:31.930 "method": "ublk_stop_disk", 00:14:31.930 "req_id": 1 00:14:31.930 } 00:14:31.930 Got JSON-RPC error response 00:14:31.930 response: 00:14:31.930 { 00:14:31.930 "code": -19, 00:14:31.930 "message": "No such device" 00:14:31.930 } 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.930 06:01:54 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.930 [2024-12-08 06:01:54.731375] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.930 [2024-12-08 06:01:54.733380] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:31.930 [2024-12-08 06:01:54.733425] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.930 06:01:54 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.930 06:01:54 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:31.930 06:01:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:31.930 00:14:31.930 real 0m10.893s 00:14:31.930 user 0m0.746s 00:14:31.930 sys 0m0.846s 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.930 06:01:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.930 ************************************ 00:14:31.930 END TEST test_create_ublk 00:14:31.930 ************************************ 00:14:31.930 06:01:54 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:31.930 06:01:54 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:31.930 06:01:54 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.930 06:01:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.930 ************************************ 00:14:31.930 START TEST test_create_multi_ublk 00:14:31.930 ************************************ 00:14:31.930 06:01:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:31.930 06:01:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:31.931 06:01:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.931 06:01:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.189 [2024-12-08 06:01:54.980271] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:32.189 [2024-12-08 06:01:54.981325] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:32.189 06:01:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.189 06:01:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:32.189 06:01:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:32.189 06:01:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.189 06:01:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:32.189 06:01:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.189 06:01:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.189 [2024-12-08 06:01:55.056503] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:32.189 [2024-12-08 
06:01:55.056992] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:32.189 [2024-12-08 06:01:55.057019] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:32.189 [2024-12-08 06:01:55.057030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.189 [2024-12-08 06:01:55.068343] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.189 [2024-12-08 06:01:55.068374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.189 [2024-12-08 06:01:55.083306] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.189 [2024-12-08 06:01:55.084022] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:32.189 [2024-12-08 06:01:55.122236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.189 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.189 [2024-12-08 06:01:55.202579] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:32.189 [2024-12-08 06:01:55.203095] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:32.189 [2024-12-08 06:01:55.203120] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:32.189 [2024-12-08 06:01:55.203132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.189 [2024-12-08 06:01:55.213246] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.189 [2024-12-08 06:01:55.213282] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.189 [2024-12-08 06:01:55.224289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.189 [2024-12-08 06:01:55.225019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:32.446 [2024-12-08 06:01:55.248267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.446 [2024-12-08 06:01:55.340444] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:32.446 [2024-12-08 06:01:55.340953] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:32.446 [2024-12-08 06:01:55.340980] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:32.446 [2024-12-08 06:01:55.340990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.446 [2024-12-08 06:01:55.352237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.446 [2024-12-08 06:01:55.352267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.446 [2024-12-08 06:01:55.364320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.446 [2024-12-08 06:01:55.365161] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:32.446 [2024-12-08 06:01:55.377250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.446 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.446 [2024-12-08 06:01:55.459816] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:32.446 [2024-12-08 06:01:55.460390] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:32.447 [2024-12-08 06:01:55.460416] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:32.447 [2024-12-08 06:01:55.460430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.447 [2024-12-08 06:01:55.471280] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.447 [2024-12-08 06:01:55.471318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.447 [2024-12-08 06:01:55.481257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.447 [2024-12-08 06:01:55.481995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:32.765 [2024-12-08 06:01:55.493298] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:32.765 { 00:14:32.765 "ublk_device": "/dev/ublkb0", 00:14:32.765 "id": 0, 00:14:32.765 "queue_depth": 512, 00:14:32.765 "num_queues": 4, 00:14:32.765 "bdev_name": "Malloc0" 00:14:32.765 }, 00:14:32.765 { 00:14:32.765 "ublk_device": "/dev/ublkb1", 00:14:32.765 "id": 1, 00:14:32.765 "queue_depth": 512, 00:14:32.765 "num_queues": 4, 00:14:32.765 "bdev_name": "Malloc1" 00:14:32.765 }, 00:14:32.765 { 00:14:32.765 "ublk_device": "/dev/ublkb2", 00:14:32.765 "id": 2, 00:14:32.765 "queue_depth": 512, 00:14:32.765 "num_queues": 4, 00:14:32.765 "bdev_name": "Malloc2" 00:14:32.765 }, 00:14:32.765 { 00:14:32.765 "ublk_device": "/dev/ublkb3", 00:14:32.765 "id": 3, 00:14:32.765 "queue_depth": 512, 00:14:32.765 "num_queues": 4, 00:14:32.765 "bdev_name": "Malloc3" 00:14:32.765 } 00:14:32.765 ]' 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:32.765 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:33.037 06:01:55 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:33.037 06:01:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:33.037 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:33.037 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.037 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.296 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.555 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:33.555 [2024-12-08 06:01:56.556531] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:14:33.555 [2024-12-08 06:01:56.596318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:33.555 [2024-12-08 06:01:56.597481] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:33.814 [2024-12-08 06:01:56.605234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:33.814 [2024-12-08 06:01:56.605645] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:33.814 [2024-12-08 06:01:56.605675] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:33.814 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.814 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.814 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:33.814 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.814 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:33.815 [2024-12-08 06:01:56.616391] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:33.815 [2024-12-08 06:01:56.662332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:33.815 [2024-12-08 06:01:56.663394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:33.815 [2024-12-08 06:01:56.670230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:33.815 [2024-12-08 06:01:56.670557] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:33.815 [2024-12-08 06:01:56.670578] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:33.815 [2024-12-08 06:01:56.685433] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:33.815 [2024-12-08 06:01:56.722317] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:33.815 [2024-12-08 06:01:56.723411] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:33.815 [2024-12-08 06:01:56.731212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:33.815 [2024-12-08 06:01:56.731628] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:33.815 [2024-12-08 06:01:56.731664] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:33.815 [2024-12-08 
06:01:56.739398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:33.815 [2024-12-08 06:01:56.774680] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:33.815 [2024-12-08 06:01:56.775811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:33.815 [2024-12-08 06:01:56.784213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:33.815 [2024-12-08 06:01:56.784605] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:33.815 [2024-12-08 06:01:56.784634] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.815 06:01:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:34.074 [2024-12-08 06:01:57.068332] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:34.074 [2024-12-08 06:01:57.069803] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:34.074 [2024-12-08 06:01:57.069862] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:34.074 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:34.074 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:34.074 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:34.074 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.074 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
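With all four devices torn down, the cleanup order used by the harness is worth spelling out: each disk is stopped first (STOP_DEV then DEL_DEV per device), the target is destroyed only once no disks remain, and the backing bdevs are deleted last. A hedged standalone sketch of the same sequence (same rpc.py assumption as the earlier sketch; the -t 120 matches the widened timeout the harness passes to ublk_destroy_target):

    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk "$i"            # per device: STOP_DEV, then DEL_DEV
    done
    scripts/rpc.py -t 120 ublk_destroy_target         # target teardown can take a while
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_delete "Malloc$i"  # delete backing bdevs last
    done
    scripts/rpc.py bdev_get_bdevs | jq length         # the leftover check below expects 0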
00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:34.334 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:34.593 00:14:34.593 real 0m2.521s 00:14:34.593 user 0m1.305s 00:14:34.593 sys 0m0.178s 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.593 ************************************ 00:14:34.593 END TEST test_create_multi_ublk 00:14:34.593 06:01:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.593 ************************************ 00:14:34.593 06:01:57 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:34.593 06:01:57 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:34.593 06:01:57 ublk -- ublk/ublk.sh@130 -- # killprocess 83637 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@950 -- # '[' -z 83637 ']' 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@954 -- # kill -0 83637 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@955 -- # uname 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83637 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:34.593 killing process with pid 83637 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83637' 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@969 -- # kill 83637 00:14:34.593 06:01:57 ublk -- common/autotest_common.sh@974 -- # wait 83637 00:14:34.853 [2024-12-08 06:01:57.764391] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:34.853 [2024-12-08 06:01:57.764495] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:35.112 00:14:35.112 real 0m19.121s 00:14:35.112 user 0m30.342s 00:14:35.112 sys 0m7.864s 00:14:35.112 06:01:57 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.112 06:01:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:35.112 ************************************ 00:14:35.112 END TEST ublk 00:14:35.112 ************************************ 00:14:35.112 06:01:58 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:35.112 06:01:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:35.112 
06:01:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.112 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:14:35.112 ************************************ 00:14:35.112 START TEST ublk_recovery 00:14:35.112 ************************************ 00:14:35.112 06:01:58 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:35.112 * Looking for test storage... 00:14:35.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:35.112 06:01:58 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:35.112 06:01:58 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:35.112 06:01:58 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.372 06:01:58 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:35.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.372 --rc genhtml_branch_coverage=1 00:14:35.372 --rc genhtml_function_coverage=1 00:14:35.372 --rc genhtml_legend=1 00:14:35.372 --rc geninfo_all_blocks=1 00:14:35.372 --rc geninfo_unexecuted_blocks=1 00:14:35.372 00:14:35.372 ' 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:35.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.372 --rc genhtml_branch_coverage=1 00:14:35.372 --rc genhtml_function_coverage=1 00:14:35.372 --rc genhtml_legend=1 00:14:35.372 --rc geninfo_all_blocks=1 00:14:35.372 --rc geninfo_unexecuted_blocks=1 00:14:35.372 00:14:35.372 ' 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:35.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.372 --rc genhtml_branch_coverage=1 00:14:35.372 --rc genhtml_function_coverage=1 00:14:35.372 --rc genhtml_legend=1 00:14:35.372 --rc geninfo_all_blocks=1 00:14:35.372 --rc geninfo_unexecuted_blocks=1 00:14:35.372 00:14:35.372 ' 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:35.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.372 --rc genhtml_branch_coverage=1 00:14:35.372 --rc genhtml_function_coverage=1 00:14:35.372 --rc genhtml_legend=1 00:14:35.372 --rc geninfo_all_blocks=1 00:14:35.372 --rc geninfo_unexecuted_blocks=1 00:14:35.372 00:14:35.372 ' 00:14:35.372 06:01:58 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:35.372 06:01:58 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:35.372 06:01:58 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:35.372 06:01:58 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84006 00:14:35.372 06:01:58 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:35.372 06:01:58 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:35.372 06:01:58 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84006 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84006 ']' 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.372 06:01:58 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.373 06:01:58 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.373 06:01:58 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.373 06:01:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:35.373 [2024-12-08 06:01:58.333625] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:35.373 [2024-12-08 06:01:58.333810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84006 ] 00:14:35.632 [2024-12-08 06:01:58.479353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:35.632 [2024-12-08 06:01:58.517327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.632 [2024-12-08 06:01:58.517352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:36.570 06:01:59 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.570 [2024-12-08 06:01:59.326263] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:36.570 [2024-12-08 06:01:59.327444] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.570 06:01:59 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.570 malloc0 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.570 06:01:59 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.570 [2024-12-08 06:01:59.366348] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:36.570 [2024-12-08 06:01:59.366489] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:36.570 [2024-12-08 06:01:59.366510] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:36.570 [2024-12-08 06:01:59.366522] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:36.570 [2024-12-08 06:01:59.377401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:36.570 [2024-12-08 06:01:59.377433] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:36.570 [2024-12-08 06:01:59.384248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:36.570 [2024-12-08 06:01:59.384438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:36.570 [2024-12-08 06:01:59.400248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:36.570 1 00:14:36.570 06:01:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.570 06:01:59 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:37.507 06:02:00 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84039 00:14:37.507 06:02:00 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:37.507 06:02:00 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:37.507 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:37.507 fio-3.35 00:14:37.507 Starting 1 process 00:14:42.779 06:02:05 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84006 00:14:42.779 06:02:05 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:48.049 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84006 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:48.049 06:02:10 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84152 00:14:48.049 06:02:10 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:48.049 06:02:10 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.049 06:02:10 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84152 00:14:48.049 06:02:10 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84152 ']' 00:14:48.049 06:02:10 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.049 06:02:10 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.049 06:02:10 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.049 06:02:10 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.049 06:02:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.049 [2024-12-08 06:02:10.532435] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
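This is the crash-recovery scenario proper: the first target (pid 84006) was killed with SIGKILL while the 60-second randrw fio job was mid-flight against /dev/ublkb1, and a second spdk_tgt (pid 84152) now starts so the still-live kernel ublk device can be re-adopted. A minimal sketch of the recovery-side RPCs issued against the new instance (the backing bdev must be recreated under the same name before recovery is requested):

    scripts/rpc.py ublk_create_target                      # fresh target in the new spdk_tgt
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev by name
    scripts/rpc.py ublk_recover_disk malloc0 1             # re-attach the existing /dev/ublkb1

As the trace below shows, recovery polls UBLK_CMD_GET_DEV_INFO (roughly once per second in this run) until the device reports a recoverable state, then drives UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY, after which the interrupted fio job resumes and runs to completion.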
00:14:48.049 [2024-12-08 06:02:10.532623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84152 ] 00:14:48.049 [2024-12-08 06:02:10.678413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:48.049 [2024-12-08 06:02:10.717241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.049 [2024-12-08 06:02:10.717315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:48.614 06:02:11 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.614 [2024-12-08 06:02:11.518250] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:48.614 [2024-12-08 06:02:11.519520] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.614 06:02:11 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.614 malloc0 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.614 06:02:11 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.614 06:02:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.614 [2024-12-08 06:02:11.554885] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:48.614 [2024-12-08 06:02:11.554956] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:48.614 [2024-12-08 06:02:11.554970] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:48.615 1 00:14:48.615 06:02:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.615 06:02:11 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84039 00:14:48.615 [2024-12-08 06:02:11.568341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:48.615 [2024-12-08 06:02:11.568364] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:49.549 [2024-12-08 06:02:12.568404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:49.549 [2024-12-08 06:02:12.572320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:49.549 [2024-12-08 06:02:12.572344] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:50.924 [2024-12-08 06:02:13.575309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:50.924 [2024-12-08 06:02:13.583326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:50.924 [2024-12-08 06:02:13.583349] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:51.857 [2024-12-08 06:02:14.584321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:51.857 [2024-12-08 06:02:14.592281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:51.857 [2024-12-08 06:02:14.592308] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:51.857 [2024-12-08 06:02:14.592320] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:51.857 [2024-12-08 06:02:14.592441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:13.819 [2024-12-08 06:02:35.675251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:13.819 [2024-12-08 06:02:35.679714] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:13.819 [2024-12-08 06:02:35.688553] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:13.819 [2024-12-08 06:02:35.688583] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:40.372 00:15:40.372 fio_test: (groupid=0, jobs=1): err= 0: pid=84042: Sun Dec 8 06:03:00 2024 00:15:40.372 read: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(2444MiB/60002msec) 00:15:40.372 slat (usec): min=2, max=170, avg= 5.89, stdev= 2.54 00:15:40.372 clat (usec): min=1197, max=30287k, avg=5791.85, stdev=291563.88 00:15:40.372 lat (usec): min=1202, max=30287k, avg=5797.74, stdev=291563.88 00:15:40.372 clat percentiles (usec): 00:15:40.372 | 1.00th=[ 2442], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:15:40.372 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2900], 00:15:40.372 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3130], 95.00th=[ 4015], 00:15:40.372 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 8586], 99.95th=[12780], 00:15:40.372 | 99.99th=[13960] 00:15:40.372 bw ( KiB/s): min=38280, max=88432, per=100.00%, avg=83568.00, stdev=9596.01, samples=59 00:15:40.372 iops : min= 9570, max=22108, avg=20892.00, stdev=2399.00, samples=59 00:15:40.372 write: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(2441MiB/60002msec); 0 zone resets 00:15:40.372 slat (usec): min=2, max=168, avg= 6.15, stdev= 2.58 00:15:40.372 clat (usec): min=791, max=30288k, avg=6476.76, stdev=320468.34 00:15:40.372 lat (usec): min=797, max=30288k, avg=6482.91, stdev=320468.33 00:15:40.372 clat percentiles (msec): 00:15:40.372 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:15:40.372 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:15:40.372 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:15:40.372 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 13], 00:15:40.372 | 99.99th=[17113] 00:15:40.372 bw ( KiB/s): min=37392, max=88160, per=100.00%, avg=83480.54, stdev=9600.51, samples=59 00:15:40.372 iops : min= 9348, max=22040, avg=20870.14, stdev=2400.13, samples=59 00:15:40.372 lat (usec) : 1000=0.01% 00:15:40.372 lat (msec) : 2=0.14%, 4=94.94%, 10=4.84%, 20=0.07%, >=2000=0.01% 00:15:40.372 cpu : usr=5.45%, sys=11.61%, ctx=39635, majf=0, minf=13 00:15:40.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:40.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.372 issued rwts: total=625641,625007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.372 
latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.372 00:15:40.372 Run status group 0 (all jobs): 00:15:40.372 READ: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=2444MiB (2563MB), run=60002-60002msec 00:15:40.372 WRITE: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=2441MiB (2560MB), run=60002-60002msec 00:15:40.372 00:15:40.372 Disk stats (read/write): 00:15:40.372 ublkb1: ios=623365/622732, merge=0/0, ticks=3566050/3924309, in_queue=7490359, util=99.92% 00:15:40.372 06:03:00 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 [2024-12-08 06:03:00.674109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.372 [2024-12-08 06:03:00.714305] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.372 [2024-12-08 06:03:00.714612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.372 [2024-12-08 06:03:00.722318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.372 [2024-12-08 06:03:00.722498] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:40.372 [2024-12-08 06:03:00.722524] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.372 06:03:00 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 [2024-12-08 06:03:00.738446] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:40.372 [2024-12-08 06:03:00.740595] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:40.372 [2024-12-08 06:03:00.740662] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.372 06:03:00 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:40.372 06:03:00 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:40.372 06:03:00 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84152 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 84152 ']' 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 84152 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84152 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.372 killing process with pid 84152 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84152' 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@969 -- # kill 84152 00:15:40.372 06:03:00 ublk_recovery -- common/autotest_common.sh@974 -- # wait 84152 00:15:40.372 [2024-12-08 06:03:01.001137] ublk.c: 835:_ublk_fini: *DEBUG*: finish 
shutdown 00:15:40.372 [2024-12-08 06:03:01.001240] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:40.372 00:15:40.372 real 1m3.410s 00:15:40.372 user 1m47.031s 00:15:40.372 sys 0m19.640s 00:15:40.372 06:03:01 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.372 06:03:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 ************************************ 00:15:40.372 END TEST ublk_recovery 00:15:40.372 ************************************ 00:15:40.372 06:03:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:40.372 06:03:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.372 06:03:01 -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 06:03:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:40.372 06:03:01 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:40.372 06:03:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:40.372 06:03:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.372 06:03:01 -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 ************************************ 00:15:40.372 START TEST ftl 00:15:40.372 ************************************ 00:15:40.372 06:03:01 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:40.372 * Looking for test storage... 00:15:40.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.372 06:03:01 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:40.372 06:03:01 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:40.372 06:03:01 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:40.372 06:03:01 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:40.372 06:03:01 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.372 06:03:01 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.372 06:03:01 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.372 06:03:01 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.372 06:03:01 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.372 06:03:01 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.372 06:03:01 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.372 06:03:01 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.372 06:03:01 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.372 06:03:01 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.372 06:03:01 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.372 06:03:01 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:40.372 06:03:01 ftl -- scripts/common.sh@345 -- # : 1 00:15:40.372 06:03:01 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.372 06:03:01 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.372 06:03:01 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:40.372 06:03:01 ftl -- scripts/common.sh@353 -- # local d=1 00:15:40.372 06:03:01 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.372 06:03:01 ftl -- scripts/common.sh@355 -- # echo 1 00:15:40.372 06:03:01 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.372 06:03:01 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:40.372 06:03:01 ftl -- scripts/common.sh@353 -- # local d=2 00:15:40.372 06:03:01 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.372 06:03:01 ftl -- scripts/common.sh@355 -- # echo 2 00:15:40.373 06:03:01 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.373 06:03:01 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.373 06:03:01 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.373 06:03:01 ftl -- scripts/common.sh@368 -- # return 0 00:15:40.373 06:03:01 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.373 06:03:01 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:40.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.373 --rc genhtml_branch_coverage=1 00:15:40.373 --rc genhtml_function_coverage=1 00:15:40.373 --rc genhtml_legend=1 00:15:40.373 --rc geninfo_all_blocks=1 00:15:40.373 --rc geninfo_unexecuted_blocks=1 00:15:40.373 00:15:40.373 ' 00:15:40.373 06:03:01 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:40.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.373 --rc genhtml_branch_coverage=1 00:15:40.373 --rc genhtml_function_coverage=1 00:15:40.373 --rc genhtml_legend=1 00:15:40.373 --rc geninfo_all_blocks=1 00:15:40.373 --rc geninfo_unexecuted_blocks=1 00:15:40.373 00:15:40.373 ' 00:15:40.373 06:03:01 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:40.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.373 --rc genhtml_branch_coverage=1 00:15:40.373 --rc genhtml_function_coverage=1 00:15:40.373 --rc genhtml_legend=1 00:15:40.373 --rc geninfo_all_blocks=1 00:15:40.373 --rc geninfo_unexecuted_blocks=1 00:15:40.373 00:15:40.373 ' 00:15:40.373 06:03:01 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:40.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.373 --rc genhtml_branch_coverage=1 00:15:40.373 --rc genhtml_function_coverage=1 00:15:40.373 --rc genhtml_legend=1 00:15:40.373 --rc geninfo_all_blocks=1 00:15:40.373 --rc geninfo_unexecuted_blocks=1 00:15:40.373 00:15:40.373 ' 00:15:40.373 06:03:01 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:40.373 06:03:01 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:40.373 06:03:01 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.373 06:03:01 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.373 06:03:01 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
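The trace above (scripts/common.sh) steps through a dotted-version comparison used to pick lcov options: `lt 1.15 2` splits both versions on `.-:` and compares them component by component. Below is a minimal sketch of that logic; `ver_lt` is an illustrative stand-in for the suite's `lt`/`cmp_versions` helpers, not the exact code.

```bash
#!/usr/bin/env bash
# Illustrative stand-in for the lt/cmp_versions helpers traced above:
# returns 0 (true) when the first dotted version sorts before the second.
ver_lt() {
    local -a ver1 ver2
    local v n1 n2
    IFS='.-:' read -ra ver1 <<< "$1"    # same split the trace shows
    IFS='.-:' read -ra ver2 <<< "$2"
    n1=${#ver1[@]} n2=${#ver2[@]}
    for (( v = 0; v < (n1 > n2 ? n1 : n2); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "old lcov options"    # fires, as in the trace above
```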
00:15:40.373 06:03:01 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:40.373 06:03:01 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.373 06:03:01 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:40.373 06:03:01 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:40.373 06:03:01 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.373 06:03:01 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.373 06:03:01 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:40.373 06:03:01 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:40.373 06:03:01 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.373 06:03:01 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.373 06:03:01 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:40.373 06:03:01 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:40.373 06:03:01 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.373 06:03:01 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.373 06:03:01 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:40.373 06:03:01 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:40.373 06:03:01 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.373 06:03:01 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.373 06:03:01 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.373 06:03:01 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.373 06:03:01 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:40.373 06:03:01 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:40.373 06:03:01 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.373 06:03:01 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.373 06:03:01 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.373 06:03:01 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:40.373 06:03:01 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:40.373 06:03:01 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:40.373 06:03:01 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:40.373 06:03:01 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:40.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:40.373 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.373 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.373 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.373 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.373 06:03:02 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84939 00:15:40.373 06:03:02 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:40.373 06:03:02 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84939 00:15:40.373 06:03:02 ftl -- common/autotest_common.sh@831 -- # '[' -z 84939 ']' 00:15:40.373 06:03:02 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.373 06:03:02 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:40.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.373 06:03:02 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.373 06:03:02 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:40.373 06:03:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:40.373 [2024-12-08 06:03:02.402757] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:40.373 [2024-12-08 06:03:02.403498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84939 ] 00:15:40.373 [2024-12-08 06:03:02.556719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.373 [2024-12-08 06:03:02.598808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.633 06:03:03 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.633 06:03:03 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:40.633 06:03:03 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:40.891 06:03:03 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:41.150 06:03:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:41.150 06:03:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:41.718 06:03:04 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:41.718 06:03:04 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:41.718 06:03:04 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@50 -- # break 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@63 -- # break 00:15:42.284 06:03:05 ftl -- ftl/ftl.sh@66 -- # killprocess 84939 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@950 -- # '[' -z 84939 ']' 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@954 -- # kill -0 84939 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@955 -- # uname 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.284 06:03:05 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84939 00:15:42.284 killing process with pid 84939 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84939' 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@969 -- # kill 84939 00:15:42.284 06:03:05 ftl -- common/autotest_common.sh@974 -- # wait 84939 00:15:42.851 06:03:05 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:42.851 06:03:05 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:42.851 06:03:05 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:42.851 06:03:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.851 06:03:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:42.851 ************************************ 00:15:42.851 START TEST ftl_fio_basic 00:15:42.851 ************************************ 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:42.851 * Looking for test storage... 00:15:42.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.851 --rc genhtml_branch_coverage=1 00:15:42.851 --rc genhtml_function_coverage=1 00:15:42.851 --rc genhtml_legend=1 00:15:42.851 --rc geninfo_all_blocks=1 00:15:42.851 --rc geninfo_unexecuted_blocks=1 00:15:42.851 00:15:42.851 ' 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.851 --rc genhtml_branch_coverage=1 00:15:42.851 --rc genhtml_function_coverage=1 00:15:42.851 --rc genhtml_legend=1 00:15:42.851 --rc geninfo_all_blocks=1 00:15:42.851 --rc geninfo_unexecuted_blocks=1 00:15:42.851 00:15:42.851 ' 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.851 --rc genhtml_branch_coverage=1 00:15:42.851 --rc genhtml_function_coverage=1 00:15:42.851 --rc genhtml_legend=1 00:15:42.851 --rc geninfo_all_blocks=1 00:15:42.851 --rc geninfo_unexecuted_blocks=1 00:15:42.851 00:15:42.851 ' 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.851 --rc genhtml_branch_coverage=1 00:15:42.851 --rc genhtml_function_coverage=1 00:15:42.851 --rc genhtml_legend=1 00:15:42.851 --rc geninfo_all_blocks=1 00:15:42.851 --rc geninfo_unexecuted_blocks=1 00:15:42.851 00:15:42.851 ' 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
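Earlier in the trace, ftl.sh picked the cache and base disks by dumping every bdev over RPC and filtering with jq (ftl.sh lines 47 and 60). A condensed sketch of that selection follows; the jq filters are copied from the log, while CACHE_BDF/BASE_BDF are illustrative names for what the suite calls cache_disks/base_disks.

```bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Cache candidates: 64-byte-metadata, non-zoned bdevs with >= 1310720 blocks.
CACHE_BDF=$("$rpc_py" bdev_get_bdevs | jq -r '.[]
    | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
    | .driver_specific.nvme[].pci_address' | head -n1)

# Base candidates: any other large, non-zoned bdev (exclude the cache disk).
BASE_BDF=$("$rpc_py" bdev_get_bdevs | jq -r --arg c "$CACHE_BDF" '.[]
    | select(.driver_specific.nvme[0].pci_address != $c
             and .zoned == false and .num_blocks >= 1310720)
    | .driver_specific.nvme[].pci_address' | head -n1)

echo "cache=$CACHE_BDF base=$BASE_BDF"   # 0000:00:10.0 / 0000:00:11.0 in this run
```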
00:15:42.851 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:42.851 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85061 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85061 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 85061 ']' 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.852 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:43.110 [2024-12-08 06:03:05.977937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
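The trace above starts the target (`spdk_tgt -m 7`, PID 85061) and then blocks in `waitforlisten` until the RPC socket answers. A simplified sketch of that start-and-wait pattern, assuming the stock rpc.py and the default socket path; the polling loop is a stand-in for the suite's waitforlisten helper, whose internals are not shown in the log.

```bash
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock

"$spdk_tgt" -m 7 &              # core mask 7 -> reactors on cores 0-2
svcpid=$!

for (( i = 0; i < 100; i++ )); do
    # rpc_get_methods succeeds once the app is up and listening on the socket
    if "$rpc_py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
kill -0 "$svcpid"               # fail loudly if the target died while waiting
```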
00:15:43.110 [2024-12-08 06:03:05.978148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85061 ] 00:15:43.110 [2024-12-08 06:03:06.124751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:43.369 [2024-12-08 06:03:06.163126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.369 [2024-12-08 06:03:06.163238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.369 [2024-12-08 06:03:06.163316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:43.935 06:03:06 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:44.501 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:44.759 { 00:15:44.759 "name": "nvme0n1", 00:15:44.759 "aliases": [ 00:15:44.759 "add82398-93db-4f81-acd0-486ee08b37fc" 00:15:44.759 ], 00:15:44.759 "product_name": "NVMe disk", 00:15:44.759 "block_size": 4096, 00:15:44.759 "num_blocks": 1310720, 00:15:44.759 "uuid": "add82398-93db-4f81-acd0-486ee08b37fc", 00:15:44.759 "numa_id": -1, 00:15:44.759 "assigned_rate_limits": { 00:15:44.759 "rw_ios_per_sec": 0, 00:15:44.759 "rw_mbytes_per_sec": 0, 00:15:44.759 "r_mbytes_per_sec": 0, 00:15:44.759 "w_mbytes_per_sec": 0 00:15:44.759 }, 00:15:44.759 "claimed": false, 00:15:44.759 "zoned": false, 00:15:44.759 "supported_io_types": { 00:15:44.759 "read": true, 00:15:44.759 "write": true, 00:15:44.759 "unmap": true, 00:15:44.759 "flush": true, 00:15:44.759 "reset": true, 00:15:44.759 "nvme_admin": true, 00:15:44.759 "nvme_io": true, 00:15:44.759 "nvme_io_md": false, 00:15:44.759 "write_zeroes": true, 00:15:44.759 "zcopy": false, 00:15:44.759 "get_zone_info": false, 00:15:44.759 "zone_management": false, 00:15:44.759 "zone_append": false, 00:15:44.759 "compare": true, 00:15:44.759 "compare_and_write": false, 00:15:44.759 "abort": true, 00:15:44.759 
"seek_hole": false, 00:15:44.759 "seek_data": false, 00:15:44.759 "copy": true, 00:15:44.759 "nvme_iov_md": false 00:15:44.759 }, 00:15:44.759 "driver_specific": { 00:15:44.759 "nvme": [ 00:15:44.759 { 00:15:44.759 "pci_address": "0000:00:11.0", 00:15:44.759 "trid": { 00:15:44.759 "trtype": "PCIe", 00:15:44.759 "traddr": "0000:00:11.0" 00:15:44.759 }, 00:15:44.759 "ctrlr_data": { 00:15:44.759 "cntlid": 0, 00:15:44.759 "vendor_id": "0x1b36", 00:15:44.759 "model_number": "QEMU NVMe Ctrl", 00:15:44.759 "serial_number": "12341", 00:15:44.759 "firmware_revision": "8.0.0", 00:15:44.759 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:44.759 "oacs": { 00:15:44.759 "security": 0, 00:15:44.759 "format": 1, 00:15:44.759 "firmware": 0, 00:15:44.759 "ns_manage": 1 00:15:44.759 }, 00:15:44.759 "multi_ctrlr": false, 00:15:44.759 "ana_reporting": false 00:15:44.759 }, 00:15:44.759 "vs": { 00:15:44.759 "nvme_version": "1.4" 00:15:44.759 }, 00:15:44.759 "ns_data": { 00:15:44.759 "id": 1, 00:15:44.759 "can_share": false 00:15:44.759 } 00:15:44.759 } 00:15:44.759 ], 00:15:44.759 "mp_policy": "active_passive" 00:15:44.759 } 00:15:44.759 } 00:15:44.759 ]' 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:44.759 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:45.017 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:45.017 06:03:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:45.274 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=eef32056-5049-4d44-9d40-78044cb79297 00:15:45.274 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eef32056-5049-4d44-9d40-78044cb79297 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=93cffc70-bd3a-46c9-ab5b-3523dac32dee 
00:15:45.531 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:45.531 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:45.789 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:45.789 { 00:15:45.789 "name": "93cffc70-bd3a-46c9-ab5b-3523dac32dee", 00:15:45.789 "aliases": [ 00:15:45.789 "lvs/nvme0n1p0" 00:15:45.789 ], 00:15:45.789 "product_name": "Logical Volume", 00:15:45.789 "block_size": 4096, 00:15:45.789 "num_blocks": 26476544, 00:15:45.789 "uuid": "93cffc70-bd3a-46c9-ab5b-3523dac32dee", 00:15:45.789 "assigned_rate_limits": { 00:15:45.789 "rw_ios_per_sec": 0, 00:15:45.789 "rw_mbytes_per_sec": 0, 00:15:45.789 "r_mbytes_per_sec": 0, 00:15:45.789 "w_mbytes_per_sec": 0 00:15:45.789 }, 00:15:45.789 "claimed": false, 00:15:45.789 "zoned": false, 00:15:45.789 "supported_io_types": { 00:15:45.789 "read": true, 00:15:45.789 "write": true, 00:15:45.789 "unmap": true, 00:15:45.789 "flush": false, 00:15:45.789 "reset": true, 00:15:45.789 "nvme_admin": false, 00:15:45.789 "nvme_io": false, 00:15:45.789 "nvme_io_md": false, 00:15:45.789 "write_zeroes": true, 00:15:45.789 "zcopy": false, 00:15:45.789 "get_zone_info": false, 00:15:45.789 "zone_management": false, 00:15:45.789 "zone_append": false, 00:15:45.789 "compare": false, 00:15:45.789 "compare_and_write": false, 00:15:45.789 "abort": false, 00:15:45.789 "seek_hole": true, 00:15:45.789 "seek_data": true, 00:15:45.789 "copy": false, 00:15:45.789 "nvme_iov_md": false 00:15:45.789 }, 00:15:45.789 "driver_specific": { 00:15:45.789 "lvol": { 00:15:45.789 "lvol_store_uuid": "eef32056-5049-4d44-9d40-78044cb79297", 00:15:45.789 "base_bdev": "nvme0n1", 00:15:45.789 "thin_provision": true, 00:15:45.789 "num_allocated_clusters": 0, 00:15:45.789 "snapshot": false, 00:15:45.789 "clone": false, 00:15:45.789 "esnap_clone": false 00:15:45.789 } 00:15:45.789 } 00:15:45.789 } 00:15:45.789 ]' 00:15:45.789 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:45.789 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:45.789 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:46.048 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:46.048 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:46.048 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:46.048 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:46.048 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:46.048 06:03:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:46.306 06:03:09 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:46.306 06:03:09 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:46.306 06:03:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:46.306 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:46.306 06:03:09 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:46.306 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:46.306 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:46.306 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:46.564 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:46.564 { 00:15:46.564 "name": "93cffc70-bd3a-46c9-ab5b-3523dac32dee", 00:15:46.564 "aliases": [ 00:15:46.564 "lvs/nvme0n1p0" 00:15:46.564 ], 00:15:46.564 "product_name": "Logical Volume", 00:15:46.564 "block_size": 4096, 00:15:46.564 "num_blocks": 26476544, 00:15:46.564 "uuid": "93cffc70-bd3a-46c9-ab5b-3523dac32dee", 00:15:46.564 "assigned_rate_limits": { 00:15:46.564 "rw_ios_per_sec": 0, 00:15:46.564 "rw_mbytes_per_sec": 0, 00:15:46.564 "r_mbytes_per_sec": 0, 00:15:46.564 "w_mbytes_per_sec": 0 00:15:46.564 }, 00:15:46.564 "claimed": false, 00:15:46.564 "zoned": false, 00:15:46.564 "supported_io_types": { 00:15:46.565 "read": true, 00:15:46.565 "write": true, 00:15:46.565 "unmap": true, 00:15:46.565 "flush": false, 00:15:46.565 "reset": true, 00:15:46.565 "nvme_admin": false, 00:15:46.565 "nvme_io": false, 00:15:46.565 "nvme_io_md": false, 00:15:46.565 "write_zeroes": true, 00:15:46.565 "zcopy": false, 00:15:46.565 "get_zone_info": false, 00:15:46.565 "zone_management": false, 00:15:46.565 "zone_append": false, 00:15:46.565 "compare": false, 00:15:46.565 "compare_and_write": false, 00:15:46.565 "abort": false, 00:15:46.565 "seek_hole": true, 00:15:46.565 "seek_data": true, 00:15:46.565 "copy": false, 00:15:46.565 "nvme_iov_md": false 00:15:46.565 }, 00:15:46.565 "driver_specific": { 00:15:46.565 "lvol": { 00:15:46.565 "lvol_store_uuid": "eef32056-5049-4d44-9d40-78044cb79297", 00:15:46.565 "base_bdev": "nvme0n1", 00:15:46.565 "thin_provision": true, 00:15:46.565 "num_allocated_clusters": 0, 00:15:46.565 "snapshot": false, 00:15:46.565 "clone": false, 00:15:46.565 "esnap_clone": false 00:15:46.565 } 00:15:46.565 } 00:15:46.565 } 00:15:46.565 ]' 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:46.565 06:03:09 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:46.823 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:46.823 06:03:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93cffc70-bd3a-46c9-ab5b-3523dac32dee 00:15:47.129 06:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:47.129 { 00:15:47.129 "name": "93cffc70-bd3a-46c9-ab5b-3523dac32dee", 00:15:47.129 "aliases": [ 00:15:47.129 "lvs/nvme0n1p0" 00:15:47.129 ], 00:15:47.129 "product_name": "Logical Volume", 00:15:47.129 "block_size": 4096, 00:15:47.129 "num_blocks": 26476544, 00:15:47.129 "uuid": "93cffc70-bd3a-46c9-ab5b-3523dac32dee", 00:15:47.129 "assigned_rate_limits": { 00:15:47.129 "rw_ios_per_sec": 0, 00:15:47.129 "rw_mbytes_per_sec": 0, 00:15:47.129 "r_mbytes_per_sec": 0, 00:15:47.129 "w_mbytes_per_sec": 0 00:15:47.129 }, 00:15:47.129 "claimed": false, 00:15:47.129 "zoned": false, 00:15:47.129 "supported_io_types": { 00:15:47.129 "read": true, 00:15:47.129 "write": true, 00:15:47.129 "unmap": true, 00:15:47.129 "flush": false, 00:15:47.129 "reset": true, 00:15:47.129 "nvme_admin": false, 00:15:47.129 "nvme_io": false, 00:15:47.129 "nvme_io_md": false, 00:15:47.129 "write_zeroes": true, 00:15:47.129 "zcopy": false, 00:15:47.129 "get_zone_info": false, 00:15:47.129 "zone_management": false, 00:15:47.129 "zone_append": false, 00:15:47.129 "compare": false, 00:15:47.129 "compare_and_write": false, 00:15:47.129 "abort": false, 00:15:47.129 "seek_hole": true, 00:15:47.129 "seek_data": true, 00:15:47.129 "copy": false, 00:15:47.129 "nvme_iov_md": false 00:15:47.129 }, 00:15:47.129 "driver_specific": { 00:15:47.129 "lvol": { 00:15:47.129 "lvol_store_uuid": "eef32056-5049-4d44-9d40-78044cb79297", 00:15:47.129 "base_bdev": "nvme0n1", 00:15:47.129 "thin_provision": true, 00:15:47.129 "num_allocated_clusters": 0, 00:15:47.129 "snapshot": false, 00:15:47.129 "clone": false, 00:15:47.129 "esnap_clone": false 00:15:47.129 } 00:15:47.129 } 00:15:47.129 } 00:15:47.129 ]' 00:15:47.129 06:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:47.129 06:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:47.129 06:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:47.392 06:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:47.392 06:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:47.392 06:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:47.392 06:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:47.392 06:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:47.392 06:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 93cffc70-bd3a-46c9-ab5b-3523dac32dee -c nvc0n1p0 --l2p_dram_limit 60 00:15:47.392 [2024-12-08 06:03:10.409156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.409231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:47.392 [2024-12-08 06:03:10.409252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:47.392 
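At this point the trace has assembled the full FTL stack: a thin-provisioned 103424 MiB lvol on the 0000:00:11.0 disk as the base device, a 5171 MiB split of the 0000:00:10.0 disk as the NV cache, and a bdev_ftl_create call binding them with a 60 MiB L2P DRAM limit. (The `line 52: [: -eq: unary operator expected` message above is the script testing an unset variable with `'[' -eq 1 ']'`; the check falls through rather than failing the run.) A condensed sketch of the same RPC sequence, with sizes and flags taken from the log and error handling omitted:

```bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: thin-provisioned 103424 MiB lvol on the 0000:00:11.0 NVMe.
"$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
lvs=$("$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs)
lvol=$("$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

# NV cache: a 5171 MiB slice (nvc0n1p0) of the 0000:00:10.0 NVMe.
"$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
"$rpc_py" bdev_split_create nvc0n1 -s 5171 1

# Bind base + cache into one FTL bdev, capping the L2P DRAM cache at 60 MiB.
"$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60
```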
[2024-12-08 06:03:10.409267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.409380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.409406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:47.392 [2024-12-08 06:03:10.409423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:47.392 [2024-12-08 06:03:10.409439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.409496] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:47.392 [2024-12-08 06:03:10.409860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:47.392 [2024-12-08 06:03:10.409891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.409906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:47.392 [2024-12-08 06:03:10.409918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:15:47.392 [2024-12-08 06:03:10.409931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.410111] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ec114949-29a5-4e0c-89f3-eff10424c208 00:15:47.392 [2024-12-08 06:03:10.411323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.411379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:47.392 [2024-12-08 06:03:10.411404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:15:47.392 [2024-12-08 06:03:10.411434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.415970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.416017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:47.392 [2024-12-08 06:03:10.416037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.450 ms 00:15:47.392 [2024-12-08 06:03:10.416048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.416201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.416244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:47.392 [2024-12-08 06:03:10.416260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:15:47.392 [2024-12-08 06:03:10.416287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.416389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.416409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:47.392 [2024-12-08 06:03:10.416443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:47.392 [2024-12-08 06:03:10.416455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.416502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:47.392 [2024-12-08 06:03:10.418043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 
06:03:10.418085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:47.392 [2024-12-08 06:03:10.418101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:15:47.392 [2024-12-08 06:03:10.418115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.392 [2024-12-08 06:03:10.418197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.392 [2024-12-08 06:03:10.418235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:47.392 [2024-12-08 06:03:10.418249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:15:47.393 [2024-12-08 06:03:10.418265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.393 [2024-12-08 06:03:10.418298] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:47.393 [2024-12-08 06:03:10.418474] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:47.393 [2024-12-08 06:03:10.418495] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:47.393 [2024-12-08 06:03:10.418513] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:47.393 [2024-12-08 06:03:10.418528] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:47.393 [2024-12-08 06:03:10.418543] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:47.393 [2024-12-08 06:03:10.418556] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:47.393 [2024-12-08 06:03:10.418569] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:47.393 [2024-12-08 06:03:10.418585] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:47.393 [2024-12-08 06:03:10.418616] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:47.393 [2024-12-08 06:03:10.418635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.393 [2024-12-08 06:03:10.418650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:47.393 [2024-12-08 06:03:10.418662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:15:47.393 [2024-12-08 06:03:10.418676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.393 [2024-12-08 06:03:10.418791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.393 [2024-12-08 06:03:10.418811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:47.393 [2024-12-08 06:03:10.418824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:15:47.393 [2024-12-08 06:03:10.418836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.393 [2024-12-08 06:03:10.418956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:47.393 [2024-12-08 06:03:10.418976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:47.393 [2024-12-08 06:03:10.419005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419031] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:47.393 [2024-12-08 06:03:10.419043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:47.393 [2024-12-08 06:03:10.419078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.393 [2024-12-08 06:03:10.419101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:47.393 [2024-12-08 06:03:10.419115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:47.393 [2024-12-08 06:03:10.419126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.393 [2024-12-08 06:03:10.419143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:47.393 [2024-12-08 06:03:10.419154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:47.393 [2024-12-08 06:03:10.419167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:47.393 [2024-12-08 06:03:10.419190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:47.393 [2024-12-08 06:03:10.419245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:47.393 [2024-12-08 06:03:10.419287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:47.393 [2024-12-08 06:03:10.419321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:47.393 [2024-12-08 06:03:10.419373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:47.393 [2024-12-08 06:03:10.419407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.393 [2024-12-08 06:03:10.419430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:47.393 [2024-12-08 06:03:10.419443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:47.393 [2024-12-08 06:03:10.419453] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.393 [2024-12-08 06:03:10.419466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:47.393 [2024-12-08 06:03:10.419476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:47.393 [2024-12-08 06:03:10.419490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:47.393 [2024-12-08 06:03:10.419513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:47.393 [2024-12-08 06:03:10.419524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419536] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:47.393 [2024-12-08 06:03:10.419548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:47.393 [2024-12-08 06:03:10.419563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.393 [2024-12-08 06:03:10.419588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:47.393 [2024-12-08 06:03:10.419599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:47.393 [2024-12-08 06:03:10.419612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:47.393 [2024-12-08 06:03:10.419623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:47.393 [2024-12-08 06:03:10.419635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:47.393 [2024-12-08 06:03:10.419646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:47.393 [2024-12-08 06:03:10.419664] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:47.393 [2024-12-08 06:03:10.419679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.393 [2024-12-08 06:03:10.419696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:47.393 [2024-12-08 06:03:10.419708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:47.393 [2024-12-08 06:03:10.419722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:47.393 [2024-12-08 06:03:10.419733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:47.393 [2024-12-08 06:03:10.419758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:47.393 [2024-12-08 06:03:10.419769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:47.393 [2024-12-08 06:03:10.419786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:47.393 [2024-12-08 06:03:10.419797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:47.393 [2024-12-08 06:03:10.419811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:47.393 [2024-12-08 06:03:10.419822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:47.393 [2024-12-08 06:03:10.419836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:47.393 [2024-12-08 06:03:10.419848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:47.393 [2024-12-08 06:03:10.419861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:47.393 [2024-12-08 06:03:10.419873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:47.393 [2024-12-08 06:03:10.419887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:47.393 [2024-12-08 06:03:10.419918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.393 [2024-12-08 06:03:10.419934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:47.393 [2024-12-08 06:03:10.419946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:47.393 [2024-12-08 06:03:10.419959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:47.393 [2024-12-08 06:03:10.419971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:47.393 [2024-12-08 06:03:10.419987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.393 [2024-12-08 06:03:10.419998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:47.393 [2024-12-08 06:03:10.420015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:15:47.394 [2024-12-08 06:03:10.420026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.394 [2024-12-08 06:03:10.420110] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
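The layout dump above and the superblock (SB) metadata dump describe the same regions in two different units: dump_region prints offsets and sizes in MiB, while the SB records blk_offs/blk_sz in FTL blocks of 4096 bytes (the block_size that bdev_get_bdevs reports for ftl0 further down). A minimal shell sketch of that conversion — an illustration only, not part of the test scripts — using two blk_sz values taken from the dumps:

# Hypothetical helper, assuming the 4096-byte block size reported for ftl0.
blk_to_mib() { echo "$(( $1 * 4096 / 1048576 )) MiB"; }
blk_to_mib $(( 0x5000 ))   # prints "80 MiB", matching "Region l2p ... blocks: 80.00 MiB"
blk_to_mib $(( 0x800 ))    # prints "8 MiB", matching each of the 8.00 MiB p2l regions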
00:15:47.394 [2024-12-08 06:03:10.420128] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:50.679 [2024-12-08 06:03:13.316490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.316805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:50.679 [2024-12-08 06:03:13.316862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2896.379 ms 00:15:50.679 [2024-12-08 06:03:13.316878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.334497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.334557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:50.679 [2024-12-08 06:03:13.334582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.512 ms 00:15:50.679 [2024-12-08 06:03:13.334594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.334738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.334758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:50.679 [2024-12-08 06:03:13.334795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:50.679 [2024-12-08 06:03:13.334807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.345927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.345997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:50.679 [2024-12-08 06:03:13.346028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.021 ms 00:15:50.679 [2024-12-08 06:03:13.346045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.346124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.346145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:50.679 [2024-12-08 06:03:13.346167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:50.679 [2024-12-08 06:03:13.346225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.346703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.346749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:50.679 [2024-12-08 06:03:13.346775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:15:50.679 [2024-12-08 06:03:13.346791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.347024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.347059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:50.679 [2024-12-08 06:03:13.347081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:15:50.679 [2024-12-08 06:03:13.347096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.353071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.353295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:50.679 [2024-12-08 
06:03:13.353348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.930 ms 00:15:50.679 [2024-12-08 06:03:13.353362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.362738] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:50.679 [2024-12-08 06:03:13.377046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.377118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:50.679 [2024-12-08 06:03:13.377138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.552 ms 00:15:50.679 [2024-12-08 06:03:13.377151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.428308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.428390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:50.679 [2024-12-08 06:03:13.428432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.069 ms 00:15:50.679 [2024-12-08 06:03:13.428451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.679 [2024-12-08 06:03:13.428684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.679 [2024-12-08 06:03:13.428718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:50.679 [2024-12-08 06:03:13.428732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:15:50.679 [2024-12-08 06:03:13.428770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.432587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.432637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:50.680 [2024-12-08 06:03:13.432656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.757 ms 00:15:50.680 [2024-12-08 06:03:13.432675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.435818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.436016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:50.680 [2024-12-08 06:03:13.436045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:15:50.680 [2024-12-08 06:03:13.436060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.436470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.436503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:50.680 [2024-12-08 06:03:13.436518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:15:50.680 [2024-12-08 06:03:13.436535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.472966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.473052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:50.680 [2024-12-08 06:03:13.473072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.396 ms 00:15:50.680 [2024-12-08 06:03:13.473090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.477338] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.477384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:50.680 [2024-12-08 06:03:13.477402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.174 ms 00:15:50.680 [2024-12-08 06:03:13.477432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.481201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.481244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:50.680 [2024-12-08 06:03:13.481260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:15:50.680 [2024-12-08 06:03:13.481272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.485394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.485441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:50.680 [2024-12-08 06:03:13.485459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.073 ms 00:15:50.680 [2024-12-08 06:03:13.485476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.485531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.485553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:50.680 [2024-12-08 06:03:13.485565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:50.680 [2024-12-08 06:03:13.485578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.485677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.680 [2024-12-08 06:03:13.485697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:50.680 [2024-12-08 06:03:13.485709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:15:50.680 [2024-12-08 06:03:13.485721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.680 [2024-12-08 06:03:13.486834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3077.163 ms, result 0 00:15:50.680 { 00:15:50.680 "name": "ftl0", 00:15:50.680 "uuid": "ec114949-29a5-4e0c-89f3-eff10424c208" 00:15:50.680 } 00:15:50.680 06:03:13 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:50.680 06:03:13 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:50.680 06:03:13 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:50.680 06:03:13 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:50.680 06:03:13 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:50.680 06:03:13 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:50.680 06:03:13 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:50.938 06:03:13 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:51.197 [ 00:15:51.197 { 00:15:51.197 "name": "ftl0", 00:15:51.197 "aliases": [ 00:15:51.197 "ec114949-29a5-4e0c-89f3-eff10424c208" 00:15:51.197 ], 00:15:51.197 "product_name": "FTL disk", 00:15:51.197 
"block_size": 4096, 00:15:51.197 "num_blocks": 20971520, 00:15:51.197 "uuid": "ec114949-29a5-4e0c-89f3-eff10424c208", 00:15:51.197 "assigned_rate_limits": { 00:15:51.197 "rw_ios_per_sec": 0, 00:15:51.197 "rw_mbytes_per_sec": 0, 00:15:51.197 "r_mbytes_per_sec": 0, 00:15:51.197 "w_mbytes_per_sec": 0 00:15:51.197 }, 00:15:51.197 "claimed": false, 00:15:51.197 "zoned": false, 00:15:51.197 "supported_io_types": { 00:15:51.197 "read": true, 00:15:51.197 "write": true, 00:15:51.197 "unmap": true, 00:15:51.197 "flush": true, 00:15:51.197 "reset": false, 00:15:51.197 "nvme_admin": false, 00:15:51.197 "nvme_io": false, 00:15:51.197 "nvme_io_md": false, 00:15:51.197 "write_zeroes": true, 00:15:51.197 "zcopy": false, 00:15:51.197 "get_zone_info": false, 00:15:51.197 "zone_management": false, 00:15:51.197 "zone_append": false, 00:15:51.197 "compare": false, 00:15:51.197 "compare_and_write": false, 00:15:51.197 "abort": false, 00:15:51.197 "seek_hole": false, 00:15:51.197 "seek_data": false, 00:15:51.197 "copy": false, 00:15:51.197 "nvme_iov_md": false 00:15:51.197 }, 00:15:51.197 "driver_specific": { 00:15:51.197 "ftl": { 00:15:51.197 "base_bdev": "93cffc70-bd3a-46c9-ab5b-3523dac32dee", 00:15:51.197 "cache": "nvc0n1p0" 00:15:51.197 } 00:15:51.197 } 00:15:51.197 } 00:15:51.197 ] 00:15:51.197 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:51.197 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:51.197 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:51.455 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:51.455 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:51.715 [2024-12-08 06:03:14.711470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.711530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:51.715 [2024-12-08 06:03:14.711555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:51.715 [2024-12-08 06:03:14.711568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.711618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:51.715 [2024-12-08 06:03:14.712072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.712099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:51.715 [2024-12-08 06:03:14.712131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:15:51.715 [2024-12-08 06:03:14.712144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.712642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.712691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:51.715 [2024-12-08 06:03:14.712705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:15:51.715 [2024-12-08 06:03:14.712728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.715986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.716040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:51.715 [2024-12-08 
06:03:14.716073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:15:51.715 [2024-12-08 06:03:14.716090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.722570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.722619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:51.715 [2024-12-08 06:03:14.722654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.448 ms 00:15:51.715 [2024-12-08 06:03:14.722667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.724205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.724317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:51.715 [2024-12-08 06:03:14.724334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:15:51.715 [2024-12-08 06:03:14.724347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.728363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.728410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:51.715 [2024-12-08 06:03:14.728444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:15:51.715 [2024-12-08 06:03:14.728458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.728641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.728664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:51.715 [2024-12-08 06:03:14.728694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:15:51.715 [2024-12-08 06:03:14.728708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.730468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.730510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:51.715 [2024-12-08 06:03:14.730558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:15:51.715 [2024-12-08 06:03:14.730571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.731905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.731981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:51.715 [2024-12-08 06:03:14.731997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:15:51.715 [2024-12-08 06:03:14.732009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.715 [2024-12-08 06:03:14.733124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.715 [2024-12-08 06:03:14.733211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:51.716 [2024-12-08 06:03:14.733229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:15:51.716 [2024-12-08 06:03:14.733244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.716 [2024-12-08 06:03:14.734365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.716 [2024-12-08 06:03:14.734418] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:51.716 [2024-12-08 06:03:14.734449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:15:51.716 [2024-12-08 06:03:14.734462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.716 [2024-12-08 06:03:14.734511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:51.716 [2024-12-08 06:03:14.734539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 
06:03:14.734826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.734991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:51.716 [2024-12-08 06:03:14.735167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:51.716 [2024-12-08 06:03:14.735659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:51.717 [2024-12-08 06:03:14.735923] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:51.717 [2024-12-08 06:03:14.735934] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec114949-29a5-4e0c-89f3-eff10424c208 00:15:51.717 [2024-12-08 06:03:14.735948] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:51.717 [2024-12-08 06:03:14.735958] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:51.717 [2024-12-08 06:03:14.735973] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:51.717 [2024-12-08 06:03:14.735984] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:51.717 [2024-12-08 06:03:14.735996] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:51.717 [2024-12-08 06:03:14.736007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:51.717 [2024-12-08 06:03:14.736018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:51.717 [2024-12-08 06:03:14.736028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:51.717 [2024-12-08 06:03:14.736039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:51.717 [2024-12-08 06:03:14.736050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.717 [2024-12-08 06:03:14.736063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:51.717 [2024-12-08 06:03:14.736075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:15:51.717 [2024-12-08 06:03:14.736087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.717 [2024-12-08 06:03:14.737873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.717 [2024-12-08 06:03:14.738041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:51.717 [2024-12-08 06:03:14.738099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:15:51.717 [2024-12-08 06:03:14.738234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.717 [2024-12-08 06:03:14.738390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.717 [2024-12-08 06:03:14.738448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:51.717 [2024-12-08 06:03:14.738562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:51.717 [2024-12-08 06:03:14.738617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.717 [2024-12-08 06:03:14.743998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.717 [2024-12-08 06:03:14.744220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:51.717 [2024-12-08 06:03:14.744347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.717 [2024-12-08 06:03:14.744511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.717 
[2024-12-08 06:03:14.744627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.717 [2024-12-08 06:03:14.744740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:51.717 [2024-12-08 06:03:14.744847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.717 [2024-12-08 06:03:14.744902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.717 [2024-12-08 06:03:14.745112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.717 [2024-12-08 06:03:14.745273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:51.717 [2024-12-08 06:03:14.745391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.717 [2024-12-08 06:03:14.745510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.717 [2024-12-08 06:03:14.745592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.717 [2024-12-08 06:03:14.745665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:51.717 [2024-12-08 06:03:14.745775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.717 [2024-12-08 06:03:14.745832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.717 [2024-12-08 06:03:14.754960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.717 [2024-12-08 06:03:14.755252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:51.717 [2024-12-08 06:03:14.755382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.717 [2024-12-08 06:03:14.755440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.763435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.982 [2024-12-08 06:03:14.763635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:51.982 [2024-12-08 06:03:14.763764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.982 [2024-12-08 06:03:14.763820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.763962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.982 [2024-12-08 06:03:14.764096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:51.982 [2024-12-08 06:03:14.764156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.982 [2024-12-08 06:03:14.764298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.764408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.982 [2024-12-08 06:03:14.764442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:51.982 [2024-12-08 06:03:14.764458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.982 [2024-12-08 06:03:14.764489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.764608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.982 [2024-12-08 06:03:14.764639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:51.982 [2024-12-08 06:03:14.764654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.982 [2024-12-08 06:03:14.764671] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.764739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.982 [2024-12-08 06:03:14.764762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:51.982 [2024-12-08 06:03:14.764777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.982 [2024-12-08 06:03:14.764790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.764846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.982 [2024-12-08 06:03:14.764869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:51.982 [2024-12-08 06:03:14.764881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.982 [2024-12-08 06:03:14.764897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.764960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.982 [2024-12-08 06:03:14.764982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:51.982 [2024-12-08 06:03:14.764994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.982 [2024-12-08 06:03:14.765007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.982 [2024-12-08 06:03:14.765386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.704 ms, result 0 00:15:51.982 true 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85061 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 85061 ']' 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 85061 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85061 00:15:51.982 killing process with pid 85061 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85061' 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 85061 00:15:51.982 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 85061 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:55.273 06:03:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:55.273 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:55.273 fio-3.35 00:15:55.273 Starting 1 thread 00:16:00.542 00:16:00.542 test: (groupid=0, jobs=1): err= 0: pid=85237: Sun Dec 8 06:03:22 2024 00:16:00.542 read: IOPS=936, BW=62.2MiB/s (65.2MB/s)(255MiB/4095msec) 00:16:00.542 slat (nsec): min=5592, max=32114, avg=7189.91, stdev=2568.18 00:16:00.542 clat (usec): min=349, max=961, avg=474.60, stdev=49.59 00:16:00.542 lat (usec): min=355, max=982, avg=481.79, stdev=50.30 00:16:00.542 clat percentiles (usec): 00:16:00.542 | 1.00th=[ 367], 5.00th=[ 404], 10.00th=[ 429], 20.00th=[ 437], 00:16:00.542 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 478], 00:16:00.542 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 562], 00:16:00.542 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 668], 99.95th=[ 758], 00:16:00.542 | 99.99th=[ 963] 00:16:00.542 write: IOPS=942, BW=62.6MiB/s (65.6MB/s)(256MiB/4091msec); 0 zone resets 00:16:00.542 slat (nsec): min=18568, max=91468, avg=24478.13, stdev=5293.18 00:16:00.542 clat (usec): min=378, max=1778, avg=543.98, stdev=66.18 00:16:00.542 lat (usec): min=401, max=1801, avg=568.46, stdev=66.40 00:16:00.542 clat percentiles (usec): 00:16:00.542 | 1.00th=[ 429], 5.00th=[ 457], 10.00th=[ 469], 20.00th=[ 490], 00:16:00.542 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 553], 00:16:00.542 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644], 00:16:00.542 | 99.00th=[ 791], 99.50th=[ 840], 99.90th=[ 955], 99.95th=[ 1029], 00:16:00.542 | 99.99th=[ 1778] 00:16:00.542 bw ( KiB/s): min=62978, max=65416, per=100.00%, avg=64227.25, stdev=789.91, samples=8 00:16:00.542 iops : min= 926, max= 962, avg=944.50, stdev=11.65, samples=8 00:16:00.542 lat (usec) : 500=48.56%, 750=50.75%, 1000=0.66% 00:16:00.542 lat 
(msec) : 2=0.03% 00:16:00.542 cpu : usr=98.97%, sys=0.27%, ctx=8, majf=0, minf=1326 00:16:00.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.542 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.542 00:16:00.542 Run status group 0 (all jobs): 00:16:00.542 READ: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=255MiB (267MB), run=4095-4095msec 00:16:00.542 WRITE: bw=62.6MiB/s (65.6MB/s), 62.6MiB/s-62.6MiB/s (65.6MB/s-65.6MB/s), io=256MiB (269MB), run=4091-4091msec 00:16:00.542 ----------------------------------------------------- 00:16:00.543 Suppressions used: 00:16:00.543 count bytes template 00:16:00.543 1 5 /usr/src/fio/parse.c 00:16:00.543 1 8 libtcmalloc_minimal.so 00:16:00.543 1 904 libcrypto.so 00:16:00.543 ----------------------------------------------------- 00:16:00.543 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:00.543 06:03:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:00.800 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:00.800 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:00.800 fio-3.35 00:16:00.800 Starting 2 threads 00:16:32.895 00:16:32.895 first_half: (groupid=0, jobs=1): err= 0: pid=85335: Sun Dec 8 06:03:53 2024 00:16:32.895 read: IOPS=2234, BW=8937KiB/s (9152kB/s)(255MiB/29181msec) 00:16:32.895 slat (nsec): min=4248, max=43415, avg=7314.27, stdev=2470.44 00:16:32.895 clat (usec): min=973, max=352943, avg=44792.68, stdev=20593.56 00:16:32.895 lat (usec): min=984, max=352948, avg=44800.00, stdev=20593.73 00:16:32.895 clat percentiles (msec): 00:16:32.895 | 1.00th=[ 8], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:16:32.895 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:16:32.895 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 49], 95.00th=[ 63], 00:16:32.895 | 99.00th=[ 157], 99.50th=[ 182], 99.90th=[ 222], 99.95th=[ 300], 00:16:32.895 | 99.99th=[ 342] 00:16:32.895 write: IOPS=2921, BW=11.4MiB/s (12.0MB/s)(256MiB/22429msec); 0 zone resets 00:16:32.895 slat (usec): min=5, max=845, avg= 9.60, stdev= 8.15 00:16:32.895 clat (usec): min=473, max=133303, avg=12404.37, stdev=21745.83 00:16:32.895 lat (usec): min=482, max=133310, avg=12413.97, stdev=21746.05 00:16:32.895 clat percentiles (usec): 00:16:32.895 | 1.00th=[ 1012], 5.00th=[ 1319], 10.00th=[ 1483], 20.00th=[ 1827], 00:16:32.895 | 30.00th=[ 2671], 40.00th=[ 4686], 50.00th=[ 6063], 60.00th=[ 7111], 00:16:32.895 | 70.00th=[ 8225], 80.00th=[ 12911], 90.00th=[ 16909], 95.00th=[ 82314], 00:16:32.895 | 99.00th=[ 95945], 99.50th=[102237], 99.90th=[123208], 99.95th=[127402], 00:16:32.895 | 99.99th=[130548] 00:16:32.895 bw ( KiB/s): min= 936, max=40520, per=100.00%, avg=23831.27, stdev=11032.26, samples=22 00:16:32.895 iops : min= 234, max=10130, avg=5957.82, stdev=2758.07, samples=22 00:16:32.895 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.39% 00:16:32.895 lat (msec) : 2=11.73%, 4=6.27%, 10=19.77%, 20=8.15%, 50=45.74% 00:16:32.895 lat (msec) : 100=6.24%, 250=1.61%, 500=0.03% 00:16:32.895 cpu : usr=98.97%, sys=0.28%, ctx=57, majf=0, minf=5533 00:16:32.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:32.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.895 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.895 issued rwts: total=65200,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.895 second_half: (groupid=0, jobs=1): err= 0: pid=85336: Sun Dec 8 06:03:53 2024 00:16:32.895 read: IOPS=2219, BW=8879KiB/s (9093kB/s)(255MiB/29394msec) 00:16:32.895 slat (usec): min=4, max=595, avg= 7.41, stdev= 4.75 00:16:32.895 clat (usec): min=933, max=358124, avg=43841.84, stdev=23270.32 00:16:32.895 lat (usec): min=940, max=358131, avg=43849.26, stdev=23270.58 00:16:32.895 clat percentiles (msec): 00:16:32.895 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 39], 20.00th=[ 39], 00:16:32.895 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:16:32.895 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 55], 00:16:32.895 | 
99.00th=[ 182], 99.50th=[ 199], 99.90th=[ 253], 99.95th=[ 279], 00:16:32.895 | 99.99th=[ 351] 00:16:32.895 write: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(256MiB/25014msec); 0 zone resets 00:16:32.895 slat (usec): min=5, max=1316, avg= 9.62, stdev= 8.29 00:16:32.895 clat (usec): min=487, max=133322, avg=13730.89, stdev=23108.68 00:16:32.895 lat (usec): min=503, max=133331, avg=13740.51, stdev=23108.80 00:16:32.895 clat percentiles (usec): 00:16:32.895 | 1.00th=[ 979], 5.00th=[ 1221], 10.00th=[ 1418], 20.00th=[ 1729], 00:16:32.895 | 30.00th=[ 2278], 40.00th=[ 3884], 50.00th=[ 5211], 60.00th=[ 6652], 00:16:32.895 | 70.00th=[ 8848], 80.00th=[ 14615], 90.00th=[ 43254], 95.00th=[ 83362], 00:16:32.895 | 99.00th=[ 96994], 99.50th=[103285], 99.90th=[124257], 99.95th=[128451], 00:16:32.895 | 99.99th=[131597] 00:16:32.895 bw ( KiB/s): min= 952, max=50448, per=92.65%, avg=19419.59, stdev=12876.82, samples=27 00:16:32.895 iops : min= 238, max=12612, avg=4854.85, stdev=3219.20, samples=27 00:16:32.895 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.57% 00:16:32.895 lat (msec) : 2=12.67%, 4=7.51%, 10=15.52%, 20=9.34%, 50=47.24% 00:16:32.895 lat (msec) : 100=5.41%, 250=1.67%, 500=0.06% 00:16:32.895 cpu : usr=98.32%, sys=0.48%, ctx=85, majf=0, minf=5591 00:16:32.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:32.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.895 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.895 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.895 00:16:32.895 Run status group 0 (all jobs): 00:16:32.895 READ: bw=17.3MiB/s (18.2MB/s), 8879KiB/s-8937KiB/s (9093kB/s-9152kB/s), io=510MiB (534MB), run=29181-29394msec 00:16:32.895 WRITE: bw=20.5MiB/s (21.5MB/s), 10.2MiB/s-11.4MiB/s (10.7MB/s-12.0MB/s), io=512MiB (537MB), run=22429-25014msec 00:16:32.895 ----------------------------------------------------- 00:16:32.895 Suppressions used: 00:16:32.895 count bytes template 00:16:32.895 2 10 /usr/src/fio/parse.c 00:16:32.895 1 96 /usr/src/fio/iolog.c 00:16:32.895 1 8 libtcmalloc_minimal.so 00:16:32.895 1 904 libcrypto.so 00:16:32.895 ----------------------------------------------------- 00:16:32.895 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:32.895 
06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:32.895 06:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:32.895 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:32.895 fio-3.35 00:16:32.895 Starting 1 thread 00:16:50.976 00:16:50.976 test: (groupid=0, jobs=1): err= 0: pid=85692: Sun Dec 8 06:04:11 2024 00:16:50.976 read: IOPS=6202, BW=24.2MiB/s (25.4MB/s)(255MiB/10513msec) 00:16:50.976 slat (nsec): min=4247, max=56574, avg=6533.16, stdev=2313.09 00:16:50.976 clat (usec): min=868, max=40578, avg=20626.59, stdev=976.61 00:16:50.976 lat (usec): min=873, max=40585, avg=20633.12, stdev=976.59 00:16:50.976 clat percentiles (usec): 00:16:50.976 | 1.00th=[19268], 5.00th=[19530], 10.00th=[19792], 20.00th=[20055], 00:16:50.976 | 30.00th=[20317], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:16:50.976 | 70.00th=[20841], 80.00th=[21103], 90.00th=[21365], 95.00th=[21890], 00:16:50.976 | 99.00th=[22938], 99.50th=[23200], 99.90th=[30016], 99.95th=[35390], 00:16:50.976 | 99.99th=[39584] 00:16:50.976 write: IOPS=11.5k, BW=45.1MiB/s (47.2MB/s)(256MiB/5682msec); 0 zone resets 00:16:50.976 slat (usec): min=5, max=143, avg= 9.53, stdev= 5.71 00:16:50.976 clat (usec): min=654, max=67209, avg=11038.34, stdev=14132.98 00:16:50.976 lat (usec): min=662, max=67216, avg=11047.87, stdev=14133.04 00:16:50.976 clat percentiles (usec): 00:16:50.976 | 1.00th=[ 971], 5.00th=[ 1172], 10.00th=[ 1303], 20.00th=[ 1483], 00:16:50.976 | 30.00th=[ 1696], 40.00th=[ 2245], 50.00th=[ 7111], 60.00th=[ 8094], 00:16:50.976 | 70.00th=[ 9241], 80.00th=[11207], 90.00th=[40109], 95.00th=[43779], 00:16:50.976 | 99.00th=[49021], 99.50th=[52691], 99.90th=[61080], 99.95th=[62653], 00:16:50.976 | 99.99th=[66323] 00:16:50.976 bw ( KiB/s): min=12928, max=62968, per=94.70%, avg=43690.67, stdev=13312.88, samples=12 00:16:50.976 iops : min= 3232, max=15742, avg=10922.67, stdev=3328.22, samples=12 00:16:50.976 lat (usec) : 750=0.01%, 1000=0.66% 00:16:50.976 lat (msec) : 2=18.24%, 4=2.06%, 10=16.65%, 20=12.89%, 50=49.07% 00:16:50.976 lat (msec) : 100=0.43% 00:16:50.976 cpu : usr=98.43%, sys=0.77%, ctx=49, majf=0, 
minf=5577 00:16:50.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:50.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.976 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.976 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.976 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.976 00:16:50.976 Run status group 0 (all jobs): 00:16:50.976 READ: bw=24.2MiB/s (25.4MB/s), 24.2MiB/s-24.2MiB/s (25.4MB/s-25.4MB/s), io=255MiB (267MB), run=10513-10513msec 00:16:50.976 WRITE: bw=45.1MiB/s (47.2MB/s), 45.1MiB/s-45.1MiB/s (47.2MB/s-47.2MB/s), io=256MiB (268MB), run=5682-5682msec 00:16:50.976 ----------------------------------------------------- 00:16:50.976 Suppressions used: 00:16:50.976 count bytes template 00:16:50.976 1 5 /usr/src/fio/parse.c 00:16:50.976 2 192 /usr/src/fio/iolog.c 00:16:50.976 1 8 libtcmalloc_minimal.so 00:16:50.976 1 904 libcrypto.so 00:16:50.976 ----------------------------------------------------- 00:16:50.976 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:50.976 Remove shared memory files 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70349 /dev/shm/spdk_tgt_trace.pid84006 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:50.976 ************************************ 00:16:50.976 END TEST ftl_fio_basic 00:16:50.976 ************************************ 00:16:50.976 00:16:50.976 real 1m7.184s 00:16:50.976 user 2m34.912s 00:16:50.976 sys 0m3.605s 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.976 06:04:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:50.976 06:04:12 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:50.976 06:04:12 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:50.976 06:04:12 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.976 06:04:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:50.977 ************************************ 00:16:50.977 START TEST ftl_bdevperf 00:16:50.977 ************************************ 00:16:50.977 06:04:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:50.977 * Looking for test storage... 
00:16:50.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.977 06:04:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:50.977 06:04:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:50.977 06:04:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 --rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 
--rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 --rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 --rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:50.977 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85953 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85953 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 85953 ']' 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:50.978 [2024-12-08 06:04:13.176125] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:50.978 [2024-12-08 06:04:13.176321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85953 ] 00:16:50.978 [2024-12-08 06:04:13.322809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.978 [2024-12-08 06:04:13.376375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:50.978 06:04:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:51.237 { 00:16:51.237 "name": "nvme0n1", 00:16:51.237 "aliases": [ 00:16:51.237 "a73f8cc7-9f43-4d71-b9c0-b91db6dfa8da" 00:16:51.237 ], 00:16:51.237 "product_name": "NVMe disk", 00:16:51.237 "block_size": 4096, 00:16:51.237 "num_blocks": 1310720, 00:16:51.237 "uuid": "a73f8cc7-9f43-4d71-b9c0-b91db6dfa8da", 00:16:51.237 "numa_id": -1, 00:16:51.237 "assigned_rate_limits": { 00:16:51.237 "rw_ios_per_sec": 0, 00:16:51.237 "rw_mbytes_per_sec": 0, 00:16:51.237 "r_mbytes_per_sec": 0, 00:16:51.237 "w_mbytes_per_sec": 0 00:16:51.237 }, 00:16:51.237 "claimed": true, 00:16:51.237 "claim_type": "read_many_write_one", 00:16:51.237 "zoned": false, 00:16:51.237 "supported_io_types": { 00:16:51.237 "read": true, 00:16:51.237 "write": true, 00:16:51.237 "unmap": true, 00:16:51.237 "flush": true, 00:16:51.237 "reset": true, 00:16:51.237 "nvme_admin": true, 00:16:51.237 "nvme_io": true, 00:16:51.237 "nvme_io_md": false, 00:16:51.237 "write_zeroes": true, 00:16:51.237 "zcopy": false, 00:16:51.237 "get_zone_info": false, 00:16:51.237 "zone_management": false, 00:16:51.237 "zone_append": false, 00:16:51.237 "compare": true, 00:16:51.237 "compare_and_write": false, 00:16:51.237 "abort": true, 00:16:51.237 "seek_hole": false, 00:16:51.237 "seek_data": false, 00:16:51.237 "copy": true, 00:16:51.237 "nvme_iov_md": false 00:16:51.237 }, 00:16:51.237 "driver_specific": { 00:16:51.237 
"nvme": [ 00:16:51.237 { 00:16:51.237 "pci_address": "0000:00:11.0", 00:16:51.237 "trid": { 00:16:51.237 "trtype": "PCIe", 00:16:51.237 "traddr": "0000:00:11.0" 00:16:51.237 }, 00:16:51.237 "ctrlr_data": { 00:16:51.237 "cntlid": 0, 00:16:51.237 "vendor_id": "0x1b36", 00:16:51.237 "model_number": "QEMU NVMe Ctrl", 00:16:51.237 "serial_number": "12341", 00:16:51.237 "firmware_revision": "8.0.0", 00:16:51.237 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:51.237 "oacs": { 00:16:51.237 "security": 0, 00:16:51.237 "format": 1, 00:16:51.237 "firmware": 0, 00:16:51.237 "ns_manage": 1 00:16:51.237 }, 00:16:51.237 "multi_ctrlr": false, 00:16:51.237 "ana_reporting": false 00:16:51.237 }, 00:16:51.237 "vs": { 00:16:51.237 "nvme_version": "1.4" 00:16:51.237 }, 00:16:51.237 "ns_data": { 00:16:51.237 "id": 1, 00:16:51.237 "can_share": false 00:16:51.237 } 00:16:51.237 } 00:16:51.237 ], 00:16:51.237 "mp_policy": "active_passive" 00:16:51.237 } 00:16:51.237 } 00:16:51.237 ]' 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:51.237 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:51.496 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=eef32056-5049-4d44-9d40-78044cb79297 00:16:51.496 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:51.496 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eef32056-5049-4d44-9d40-78044cb79297 00:16:51.753 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:52.011 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=03fc5077-0abf-46a5-a58b-ceb417aa875b 00:16:52.011 06:04:14 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 03fc5077-0abf-46a5-a58b-ceb417aa875b 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=579ecc57-221d-43be-8620-a02278e07755 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 579ecc57-221d-43be-8620-a02278e07755 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=579ecc57-221d-43be-8620-a02278e07755 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 579ecc57-221d-43be-8620-a02278e07755 00:16:52.267 06:04:15 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=579ecc57-221d-43be-8620-a02278e07755 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:52.267 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 579ecc57-221d-43be-8620-a02278e07755 00:16:52.524 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:52.524 { 00:16:52.524 "name": "579ecc57-221d-43be-8620-a02278e07755", 00:16:52.524 "aliases": [ 00:16:52.524 "lvs/nvme0n1p0" 00:16:52.524 ], 00:16:52.524 "product_name": "Logical Volume", 00:16:52.524 "block_size": 4096, 00:16:52.524 "num_blocks": 26476544, 00:16:52.524 "uuid": "579ecc57-221d-43be-8620-a02278e07755", 00:16:52.524 "assigned_rate_limits": { 00:16:52.524 "rw_ios_per_sec": 0, 00:16:52.524 "rw_mbytes_per_sec": 0, 00:16:52.524 "r_mbytes_per_sec": 0, 00:16:52.524 "w_mbytes_per_sec": 0 00:16:52.524 }, 00:16:52.524 "claimed": false, 00:16:52.524 "zoned": false, 00:16:52.524 "supported_io_types": { 00:16:52.524 "read": true, 00:16:52.524 "write": true, 00:16:52.524 "unmap": true, 00:16:52.524 "flush": false, 00:16:52.524 "reset": true, 00:16:52.524 "nvme_admin": false, 00:16:52.524 "nvme_io": false, 00:16:52.524 "nvme_io_md": false, 00:16:52.524 "write_zeroes": true, 00:16:52.524 "zcopy": false, 00:16:52.524 "get_zone_info": false, 00:16:52.524 "zone_management": false, 00:16:52.524 "zone_append": false, 00:16:52.524 "compare": false, 00:16:52.524 "compare_and_write": false, 00:16:52.524 "abort": false, 00:16:52.524 "seek_hole": true, 00:16:52.524 "seek_data": true, 00:16:52.524 "copy": false, 00:16:52.524 "nvme_iov_md": false 00:16:52.524 }, 00:16:52.524 "driver_specific": { 00:16:52.524 "lvol": { 00:16:52.524 "lvol_store_uuid": "03fc5077-0abf-46a5-a58b-ceb417aa875b", 00:16:52.524 "base_bdev": "nvme0n1", 00:16:52.524 "thin_provision": true, 00:16:52.524 "num_allocated_clusters": 0, 00:16:52.524 "snapshot": false, 00:16:52.524 "clone": false, 00:16:52.524 "esnap_clone": false 00:16:52.524 } 00:16:52.524 } 00:16:52.524 } 00:16:52.524 ]' 00:16:52.524 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:52.524 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:52.524 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:52.781 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:52.781 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:52.781 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:52.781 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:52.781 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:52.781 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 579ecc57-221d-43be-8620-a02278e07755 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=579ecc57-221d-43be-8620-a02278e07755 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:53.038 06:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 579ecc57-221d-43be-8620-a02278e07755 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:53.295 { 00:16:53.295 "name": "579ecc57-221d-43be-8620-a02278e07755", 00:16:53.295 "aliases": [ 00:16:53.295 "lvs/nvme0n1p0" 00:16:53.295 ], 00:16:53.295 "product_name": "Logical Volume", 00:16:53.295 "block_size": 4096, 00:16:53.295 "num_blocks": 26476544, 00:16:53.295 "uuid": "579ecc57-221d-43be-8620-a02278e07755", 00:16:53.295 "assigned_rate_limits": { 00:16:53.295 "rw_ios_per_sec": 0, 00:16:53.295 "rw_mbytes_per_sec": 0, 00:16:53.295 "r_mbytes_per_sec": 0, 00:16:53.295 "w_mbytes_per_sec": 0 00:16:53.295 }, 00:16:53.295 "claimed": false, 00:16:53.295 "zoned": false, 00:16:53.295 "supported_io_types": { 00:16:53.295 "read": true, 00:16:53.295 "write": true, 00:16:53.295 "unmap": true, 00:16:53.295 "flush": false, 00:16:53.295 "reset": true, 00:16:53.295 "nvme_admin": false, 00:16:53.295 "nvme_io": false, 00:16:53.295 "nvme_io_md": false, 00:16:53.295 "write_zeroes": true, 00:16:53.295 "zcopy": false, 00:16:53.295 "get_zone_info": false, 00:16:53.295 "zone_management": false, 00:16:53.295 "zone_append": false, 00:16:53.295 "compare": false, 00:16:53.295 "compare_and_write": false, 00:16:53.295 "abort": false, 00:16:53.295 "seek_hole": true, 00:16:53.295 "seek_data": true, 00:16:53.295 "copy": false, 00:16:53.295 "nvme_iov_md": false 00:16:53.295 }, 00:16:53.295 "driver_specific": { 00:16:53.295 "lvol": { 00:16:53.295 "lvol_store_uuid": "03fc5077-0abf-46a5-a58b-ceb417aa875b", 00:16:53.295 "base_bdev": "nvme0n1", 00:16:53.295 "thin_provision": true, 00:16:53.295 "num_allocated_clusters": 0, 00:16:53.295 "snapshot": false, 00:16:53.295 "clone": false, 00:16:53.295 "esnap_clone": false 00:16:53.295 } 00:16:53.295 } 00:16:53.295 } 00:16:53.295 ]' 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:53.295 06:04:16 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:53.552 06:04:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:53.552 06:04:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 579ecc57-221d-43be-8620-a02278e07755 00:16:53.552 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=579ecc57-221d-43be-8620-a02278e07755 00:16:53.552 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:53.552 06:04:16 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:53.552 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:53.552 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 579ecc57-221d-43be-8620-a02278e07755 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:53.809 { 00:16:53.809 "name": "579ecc57-221d-43be-8620-a02278e07755", 00:16:53.809 "aliases": [ 00:16:53.809 "lvs/nvme0n1p0" 00:16:53.809 ], 00:16:53.809 "product_name": "Logical Volume", 00:16:53.809 "block_size": 4096, 00:16:53.809 "num_blocks": 26476544, 00:16:53.809 "uuid": "579ecc57-221d-43be-8620-a02278e07755", 00:16:53.809 "assigned_rate_limits": { 00:16:53.809 "rw_ios_per_sec": 0, 00:16:53.809 "rw_mbytes_per_sec": 0, 00:16:53.809 "r_mbytes_per_sec": 0, 00:16:53.809 "w_mbytes_per_sec": 0 00:16:53.809 }, 00:16:53.809 "claimed": false, 00:16:53.809 "zoned": false, 00:16:53.809 "supported_io_types": { 00:16:53.809 "read": true, 00:16:53.809 "write": true, 00:16:53.809 "unmap": true, 00:16:53.809 "flush": false, 00:16:53.809 "reset": true, 00:16:53.809 "nvme_admin": false, 00:16:53.809 "nvme_io": false, 00:16:53.809 "nvme_io_md": false, 00:16:53.809 "write_zeroes": true, 00:16:53.809 "zcopy": false, 00:16:53.809 "get_zone_info": false, 00:16:53.809 "zone_management": false, 00:16:53.809 "zone_append": false, 00:16:53.809 "compare": false, 00:16:53.809 "compare_and_write": false, 00:16:53.809 "abort": false, 00:16:53.809 "seek_hole": true, 00:16:53.809 "seek_data": true, 00:16:53.809 "copy": false, 00:16:53.809 "nvme_iov_md": false 00:16:53.809 }, 00:16:53.809 "driver_specific": { 00:16:53.809 "lvol": { 00:16:53.809 "lvol_store_uuid": "03fc5077-0abf-46a5-a58b-ceb417aa875b", 00:16:53.809 "base_bdev": "nvme0n1", 00:16:53.809 "thin_provision": true, 00:16:53.809 "num_allocated_clusters": 0, 00:16:53.809 "snapshot": false, 00:16:53.809 "clone": false, 00:16:53.809 "esnap_clone": false 00:16:53.809 } 00:16:53.809 } 00:16:53.809 } 00:16:53.809 ]' 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:53.809 06:04:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 579ecc57-221d-43be-8620-a02278e07755 -c nvc0n1p0 --l2p_dram_limit 20 00:16:54.069 [2024-12-08 06:04:17.047383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.047475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:54.069 [2024-12-08 06:04:17.047499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.069 [2024-12-08 06:04:17.047513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.047596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.047614] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.069 [2024-12-08 06:04:17.047630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:54.069 [2024-12-08 06:04:17.047642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.047670] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:54.069 [2024-12-08 06:04:17.048039] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:54.069 [2024-12-08 06:04:17.048066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.048078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.069 [2024-12-08 06:04:17.048092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:16:54.069 [2024-12-08 06:04:17.048102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.048271] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fbb8d458-10dc-4e9e-b206-47c752b166b5 00:16:54.069 [2024-12-08 06:04:17.049165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.049204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:54.069 [2024-12-08 06:04:17.049219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:54.069 [2024-12-08 06:04:17.049231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.053617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.053674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.069 [2024-12-08 06:04:17.053689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.345 ms 00:16:54.069 [2024-12-08 06:04:17.053717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.053800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.053821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.069 [2024-12-08 06:04:17.053838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:54.069 [2024-12-08 06:04:17.053851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.053937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.053976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:54.069 [2024-12-08 06:04:17.053996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:54.069 [2024-12-08 06:04:17.054010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.054038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.069 [2024-12-08 06:04:17.055551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.055593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.069 [2024-12-08 06:04:17.055612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.518 ms 00:16:54.069 [2024-12-08 06:04:17.055624] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.055667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.055684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:54.069 [2024-12-08 06:04:17.055702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:54.069 [2024-12-08 06:04:17.055714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.055737] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:54.069 [2024-12-08 06:04:17.055929] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:54.069 [2024-12-08 06:04:17.055953] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:54.069 [2024-12-08 06:04:17.055968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:54.069 [2024-12-08 06:04:17.055984] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:54.069 [2024-12-08 06:04:17.055998] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:54.069 [2024-12-08 06:04:17.056013] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:54.069 [2024-12-08 06:04:17.056024] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:54.069 [2024-12-08 06:04:17.056036] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:54.069 [2024-12-08 06:04:17.056046] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:54.069 [2024-12-08 06:04:17.056067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.056078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:54.069 [2024-12-08 06:04:17.056095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:16:54.069 [2024-12-08 06:04:17.056105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.056193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.069 [2024-12-08 06:04:17.056206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:54.069 [2024-12-08 06:04:17.056220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:54.069 [2024-12-08 06:04:17.056248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.069 [2024-12-08 06:04:17.056372] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:54.069 [2024-12-08 06:04:17.056391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:54.069 [2024-12-08 06:04:17.056404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.069 [2024-12-08 06:04:17.056416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:54.069 [2024-12-08 06:04:17.056441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:54.069 
[2024-12-08 06:04:17.056466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:54.069 [2024-12-08 06:04:17.056478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.069 [2024-12-08 06:04:17.056499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:54.069 [2024-12-08 06:04:17.056510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:54.069 [2024-12-08 06:04:17.056524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.069 [2024-12-08 06:04:17.056534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:54.069 [2024-12-08 06:04:17.056546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:54.069 [2024-12-08 06:04:17.056555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:54.069 [2024-12-08 06:04:17.056577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:54.069 [2024-12-08 06:04:17.056589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:54.069 [2024-12-08 06:04:17.056612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.069 [2024-12-08 06:04:17.056634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:54.069 [2024-12-08 06:04:17.056644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.069 [2024-12-08 06:04:17.056666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:54.069 [2024-12-08 06:04:17.056677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.069 [2024-12-08 06:04:17.056700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:54.069 [2024-12-08 06:04:17.056710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:54.069 [2024-12-08 06:04:17.056732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:54.069 [2024-12-08 06:04:17.056745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:54.069 [2024-12-08 06:04:17.056755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.069 [2024-12-08 06:04:17.056766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:54.070 [2024-12-08 06:04:17.056776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:54.070 [2024-12-08 06:04:17.056787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.070 [2024-12-08 06:04:17.056797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:54.070 [2024-12-08 06:04:17.056809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:54.070 [2024-12-08 06:04:17.056818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.070 [2024-12-08 06:04:17.056830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:54.070 [2024-12-08 06:04:17.056840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:54.070 [2024-12-08 06:04:17.056851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.070 [2024-12-08 06:04:17.056861] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:54.070 [2024-12-08 06:04:17.056875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:54.070 [2024-12-08 06:04:17.056886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.070 [2024-12-08 06:04:17.056898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.070 [2024-12-08 06:04:17.056910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:54.070 [2024-12-08 06:04:17.056922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:54.070 [2024-12-08 06:04:17.056932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:54.070 [2024-12-08 06:04:17.056943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:54.070 [2024-12-08 06:04:17.056953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:54.070 [2024-12-08 06:04:17.056966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:54.070 [2024-12-08 06:04:17.056980] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:54.070 [2024-12-08 06:04:17.056996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.070 [2024-12-08 06:04:17.057008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:54.070 [2024-12-08 06:04:17.057021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:54.070 [2024-12-08 06:04:17.057031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:54.070 [2024-12-08 06:04:17.057045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:54.070 [2024-12-08 06:04:17.057057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:54.070 [2024-12-08 06:04:17.057071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:54.070 [2024-12-08 06:04:17.057082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:54.070 [2024-12-08 06:04:17.057105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:54.070 [2024-12-08 06:04:17.057116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:54.070 [2024-12-08 06:04:17.057128] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:54.070 [2024-12-08 06:04:17.057139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:54.070 [2024-12-08 06:04:17.057152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:54.070 [2024-12-08 06:04:17.057162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:54.070 [2024-12-08 06:04:17.057175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:54.070 [2024-12-08 06:04:17.057200] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:54.070 [2024-12-08 06:04:17.057215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.070 [2024-12-08 06:04:17.057234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:54.070 [2024-12-08 06:04:17.057248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:54.070 [2024-12-08 06:04:17.057259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:54.070 [2024-12-08 06:04:17.057273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:54.070 [2024-12-08 06:04:17.057285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.070 [2024-12-08 06:04:17.057300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:54.070 [2024-12-08 06:04:17.057314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:16:54.070 [2024-12-08 06:04:17.057329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.070 [2024-12-08 06:04:17.057382] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:54.070 [2024-12-08 06:04:17.057402] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:56.600 [2024-12-08 06:04:19.212197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.212294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:56.600 [2024-12-08 06:04:19.212315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2154.827 ms 00:16:56.600 [2024-12-08 06:04:19.212340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.231269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.231367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.600 [2024-12-08 06:04:19.231392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.834 ms 00:16:56.600 [2024-12-08 06:04:19.231440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.231642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.231681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.600 [2024-12-08 06:04:19.231700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:56.600 [2024-12-08 06:04:19.231718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.241086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.241145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.600 [2024-12-08 06:04:19.241161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.265 ms 00:16:56.600 [2024-12-08 06:04:19.241184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.241266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.241286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.600 [2024-12-08 06:04:19.241299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:56.600 [2024-12-08 06:04:19.241321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.241745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.241775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.600 [2024-12-08 06:04:19.241789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:16:56.600 [2024-12-08 06:04:19.241805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.241944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.241966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.600 [2024-12-08 06:04:19.241999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:16:56.600 [2024-12-08 06:04:19.242012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.246638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.246675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.600 [2024-12-08 
06:04:19.246721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 00:16:56.600 [2024-12-08 06:04:19.246734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.254805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:56.600 [2024-12-08 06:04:19.259520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.259569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.600 [2024-12-08 06:04:19.259602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.705 ms 00:16:56.600 [2024-12-08 06:04:19.259614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.304438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.304515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:56.600 [2024-12-08 06:04:19.304539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.786 ms 00:16:56.600 [2024-12-08 06:04:19.304551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.304756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.304774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.600 [2024-12-08 06:04:19.304792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:56.600 [2024-12-08 06:04:19.304803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.308203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.308265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:56.600 [2024-12-08 06:04:19.308283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:16:56.600 [2024-12-08 06:04:19.308295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.311158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.311233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:56.600 [2024-12-08 06:04:19.311251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:16:56.600 [2024-12-08 06:04:19.311262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.311642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.600 [2024-12-08 06:04:19.311685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.600 [2024-12-08 06:04:19.311707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:16:56.600 [2024-12-08 06:04:19.311719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.600 [2024-12-08 06:04:19.335514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.601 [2024-12-08 06:04:19.335566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:56.601 [2024-12-08 06:04:19.335600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.738 ms 00:16:56.601 [2024-12-08 06:04:19.335613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.601 [2024-12-08 06:04:19.339934] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.601 [2024-12-08 06:04:19.339982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:56.601 [2024-12-08 06:04:19.340003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:16:56.601 [2024-12-08 06:04:19.340014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.601 [2024-12-08 06:04:19.343530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.601 [2024-12-08 06:04:19.343566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:56.601 [2024-12-08 06:04:19.343585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.463 ms 00:16:56.601 [2024-12-08 06:04:19.343596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.601 [2024-12-08 06:04:19.347235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.601 [2024-12-08 06:04:19.347294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.601 [2024-12-08 06:04:19.347315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:16:56.601 [2024-12-08 06:04:19.347325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.601 [2024-12-08 06:04:19.347377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.601 [2024-12-08 06:04:19.347395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.601 [2024-12-08 06:04:19.347454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:56.601 [2024-12-08 06:04:19.347469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.601 [2024-12-08 06:04:19.347575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.601 [2024-12-08 06:04:19.347600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.601 [2024-12-08 06:04:19.347617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:56.601 [2024-12-08 06:04:19.347650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.601 [2024-12-08 06:04:19.348838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2300.919 ms, result 0 00:16:56.601 { 00:16:56.601 "name": "ftl0", 00:16:56.601 "uuid": "fbb8d458-10dc-4e9e-b206-47c752b166b5" 00:16:56.601 } 00:16:56.601 06:04:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:56.601 06:04:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:56.601 06:04:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:56.859 06:04:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:56.859 [2024-12-08 06:04:19.776327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:56.859 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:56.859 Zero copy mechanism will not be used. 00:16:56.859 Running I/O for 4 seconds... 
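A note on the zero-copy message above: bdevperf keeps its zero-copy path only for I/O sizes at or under the 65536-byte threshold, and this first workload's 69632-byte I/Os (65536 + 4096, i.e. a 64 KiB payload plus what looks like a 4 KiB metadata slice) land just over it. A minimal sketch of that cutoff, with illustrative variable names:

io_size=69632            # from: perform_tests -q 1 -w randwrite -t 4 -o 69632
zcopy_threshold=65536    # the threshold quoted in the notice above
if (( io_size > zcopy_threshold )); then
    echo "I/O size of ${io_size} is greater than zero copy threshold (${zcopy_threshold})."
fi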
00:16:58.742 1704.00 IOPS, 113.16 MiB/s [2024-12-08T06:04:23.165Z] 1716.50 IOPS, 113.99 MiB/s [2024-12-08T06:04:24.102Z] 1728.33 IOPS, 114.77 MiB/s [2024-12-08T06:04:24.102Z] 1721.25 IOPS, 114.30 MiB/s 00:17:01.057 Latency(us) 00:17:01.057 [2024-12-08T06:04:24.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.057 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:01.057 ftl0 : 4.00 1720.72 114.27 0.00 0.00 609.84 243.90 2606.55 00:17:01.057 [2024-12-08T06:04:24.102Z] =================================================================================================================== 00:17:01.057 [2024-12-08T06:04:24.102Z] Total : 1720.72 114.27 0.00 0.00 609.84 243.90 2606.55 00:17:01.057 [2024-12-08 06:04:23.785134] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:01.057 { 00:17:01.057 "results": [ 00:17:01.057 { 00:17:01.057 "job": "ftl0", 00:17:01.057 "core_mask": "0x1", 00:17:01.057 "workload": "randwrite", 00:17:01.057 "status": "finished", 00:17:01.057 "queue_depth": 1, 00:17:01.057 "io_size": 69632, 00:17:01.057 "runtime": 4.002395, 00:17:01.057 "iops": 1720.719719068208, 00:17:01.057 "mibps": 114.26654384437319, 00:17:01.057 "io_failed": 0, 00:17:01.057 "io_timeout": 0, 00:17:01.057 "avg_latency_us": 609.8412893857993, 00:17:01.057 "min_latency_us": 243.89818181818183, 00:17:01.057 "max_latency_us": 2606.5454545454545 00:17:01.057 } 00:17:01.057 ], 00:17:01.057 "core_count": 1 00:17:01.057 } 00:17:01.057 06:04:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:01.057 [2024-12-08 06:04:23.934561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:01.057 Running I/O for 4 seconds... 
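The first results block above is self-consistent: the reported "mibps" is just iops scaled by the I/O size. Assuming that JSON were saved to a file (results.json is a hypothetical name), jq reproduces the figure:

jq '.results[0].iops * .results[0].io_size / 1048576' results.json
# 1720.719719068208 * 69632 / 1048576 = 114.2665..., matching "mibps" above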
00:17:02.941 7794.00 IOPS, 30.45 MiB/s [2024-12-08T06:04:27.376Z] 7309.50 IOPS, 28.55 MiB/s [2024-12-08T06:04:27.942Z] 7172.67 IOPS, 28.02 MiB/s [2024-12-08T06:04:28.200Z] 7189.25 IOPS, 28.08 MiB/s 00:17:05.155 Latency(us) 00:17:05.155 [2024-12-08T06:04:28.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.155 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.155 ftl0 : 4.02 7184.69 28.07 0.00 0.00 17768.78 336.99 42657.98 00:17:05.155 [2024-12-08T06:04:28.200Z] =================================================================================================================== 00:17:05.155 [2024-12-08T06:04:28.200Z] Total : 7184.69 28.07 0.00 0.00 17768.78 0.00 42657.98 00:17:05.155 { 00:17:05.155 "results": [ 00:17:05.155 { 00:17:05.155 "job": "ftl0", 00:17:05.155 "core_mask": "0x1", 00:17:05.155 "workload": "randwrite", 00:17:05.155 "status": "finished", 00:17:05.155 "queue_depth": 128, 00:17:05.155 "io_size": 4096, 00:17:05.155 "runtime": 4.019935, 00:17:05.155 "iops": 7184.69328484167, 00:17:05.155 "mibps": 28.065208143912773, 00:17:05.155 "io_failed": 0, 00:17:05.155 "io_timeout": 0, 00:17:05.155 "avg_latency_us": 17768.77777942852, 00:17:05.155 "min_latency_us": 336.9890909090909, 00:17:05.155 "max_latency_us": 42657.97818181818 00:17:05.155 } 00:17:05.155 ], 00:17:05.155 "core_count": 1 00:17:05.155 } 00:17:05.155 [2024-12-08 06:04:27.961235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:05.155 06:04:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:05.155 [2024-12-08 06:04:28.087135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:05.155 Running I/O for 4 seconds... 
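For orientation, this phase sweeps the same ftl0 bdev through three profiles -- shallow large random writes, deep 4 KiB random writes, and (results below) a read-back verify pass -- via the same RPC helper each time:

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096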
00:17:07.464 5565.00 IOPS, 21.74 MiB/s [2024-12-08T06:04:31.445Z] 5606.00 IOPS, 21.90 MiB/s [2024-12-08T06:04:32.381Z] 5613.67 IOPS, 21.93 MiB/s [2024-12-08T06:04:32.381Z] 5610.75 IOPS, 21.92 MiB/s 00:17:09.336 Latency(us) 00:17:09.336 [2024-12-08T06:04:32.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.336 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:09.336 Verification LBA range: start 0x0 length 0x1400000 00:17:09.336 ftl0 : 4.01 5621.99 21.96 0.00 0.00 22684.46 374.23 28955.00 00:17:09.336 [2024-12-08T06:04:32.381Z] =================================================================================================================== 00:17:09.336 [2024-12-08T06:04:32.381Z] Total : 5621.99 21.96 0.00 0.00 22684.46 0.00 28955.00 00:17:09.336 [2024-12-08 06:04:32.109422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:09.336 { 00:17:09.336 "results": [ 00:17:09.336 { 00:17:09.336 "job": "ftl0", 00:17:09.336 "core_mask": "0x1", 00:17:09.336 "workload": "verify", 00:17:09.336 "status": "finished", 00:17:09.336 "verify_range": { 00:17:09.336 "start": 0, 00:17:09.336 "length": 20971520 00:17:09.336 }, 00:17:09.336 "queue_depth": 128, 00:17:09.336 "io_size": 4096, 00:17:09.336 "runtime": 4.014774, 00:17:09.336 "iops": 5621.985197672397, 00:17:09.336 "mibps": 21.9608796784078, 00:17:09.336 "io_failed": 0, 00:17:09.336 "io_timeout": 0, 00:17:09.336 "avg_latency_us": 22684.459005723354, 00:17:09.336 "min_latency_us": 374.22545454545457, 00:17:09.336 "max_latency_us": 28954.996363636365 00:17:09.336 } 00:17:09.336 ], 00:17:09.336 "core_count": 1 00:17:09.336 } 00:17:09.336 06:04:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:09.596 [2024-12-08 06:04:32.409904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.409968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.596 [2024-12-08 06:04:32.409989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:09.596 [2024-12-08 06:04:32.410000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.410033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.596 [2024-12-08 06:04:32.410432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.410455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.596 [2024-12-08 06:04:32.410467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:17:09.596 [2024-12-08 06:04:32.410483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.412083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.412160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.596 [2024-12-08 06:04:32.412191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:17:09.596 [2024-12-08 06:04:32.412206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.589308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.589387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:17:09.596 [2024-12-08 06:04:32.589406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 177.065 ms 00:17:09.596 [2024-12-08 06:04:32.589420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.595202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.595256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.596 [2024-12-08 06:04:32.595270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.741 ms 00:17:09.596 [2024-12-08 06:04:32.595283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.596894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.596948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.596 [2024-12-08 06:04:32.596962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms 00:17:09.596 [2024-12-08 06:04:32.596974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.601057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.601145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.596 [2024-12-08 06:04:32.601160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:17:09.596 [2024-12-08 06:04:32.601178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.601302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.601368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.596 [2024-12-08 06:04:32.601382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:09.596 [2024-12-08 06:04:32.601403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.603176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.603291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:09.596 [2024-12-08 06:04:32.603307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:17:09.596 [2024-12-08 06:04:32.603320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.604674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.604736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:09.596 [2024-12-08 06:04:32.604751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:17:09.596 [2024-12-08 06:04:32.604778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.605885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.605938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.596 [2024-12-08 06:04:32.605951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:17:09.596 [2024-12-08 06:04:32.605966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.607014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.596 [2024-12-08 06:04:32.607099] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.596 [2024-12-08 06:04:32.607113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:17:09.596 [2024-12-08 06:04:32.607124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.596 [2024-12-08 06:04:32.607158] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.596 [2024-12-08 06:04:32.607219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:09.596 [2024-12-08 06:04:32.607557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.596 [2024-12-08 06:04:32.607723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.607993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608530] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.597 [2024-12-08 06:04:32.608587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.597 [2024-12-08 06:04:32.608598] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fbb8d458-10dc-4e9e-b206-47c752b166b5 00:17:09.597 [2024-12-08 06:04:32.608611] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.597 [2024-12-08 06:04:32.608623] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.597 [2024-12-08 06:04:32.608635] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.597 [2024-12-08 06:04:32.608645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.597 [2024-12-08 06:04:32.608658] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.597 [2024-12-08 06:04:32.608669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:09.597 [2024-12-08 06:04:32.608693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.597 [2024-12-08 06:04:32.608703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.597 [2024-12-08 06:04:32.608714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.597 [2024-12-08 06:04:32.608725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.597 [2024-12-08 06:04:32.608738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.597 [2024-12-08 06:04:32.608749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:17:09.597 [2024-12-08 06:04:32.608761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.597 [2024-12-08 06:04:32.609965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.597 [2024-12-08 06:04:32.609990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.597 [2024-12-08 06:04:32.610004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:17:09.597 [2024-12-08 06:04:32.610016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.597 [2024-12-08 06:04:32.610122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.597 [2024-12-08 06:04:32.610142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:09.597 [2024-12-08 06:04:32.610155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:09.597 [2024-12-08 06:04:32.610169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.597 [2024-12-08 06:04:32.614652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.597 [2024-12-08 06:04:32.614697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.597 [2024-12-08 06:04:32.614713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.597 [2024-12-08 06:04:32.614741] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:09.597 [2024-12-08 06:04:32.614811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.597 [2024-12-08 06:04:32.614838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.598 [2024-12-08 06:04:32.614849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.614862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.614938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.614961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.598 [2024-12-08 06:04:32.614973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.614985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.615006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.615022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.598 [2024-12-08 06:04:32.615033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.615048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.623703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.623795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.598 [2024-12-08 06:04:32.623814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.623828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.632161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.632240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.598 [2024-12-08 06:04:32.632257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.632296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.632520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.632562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.598 [2024-12-08 06:04:32.632577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.632590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.632648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.632687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.598 [2024-12-08 06:04:32.632699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.632730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.632866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.632890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.598 [2024-12-08 06:04:32.632905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:09.598 [2024-12-08 06:04:32.632926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.632979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.633001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:09.598 [2024-12-08 06:04:32.633013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.633026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.633067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.633083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.598 [2024-12-08 06:04:32.633097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.633109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.633156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.598 [2024-12-08 06:04:32.633175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.598 [2024-12-08 06:04:32.633194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.598 [2024-12-08 06:04:32.633209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.598 [2024-12-08 06:04:32.633405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 223.439 ms, result 0 00:17:09.598 true 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85953 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 85953 ']' 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 85953 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85953 00:17:09.856 killing process with pid 85953 00:17:09.856 Received shutdown signal, test time was about 4.000000 seconds 00:17:09.856 00:17:09.856 Latency(us) 00:17:09.856 [2024-12-08T06:04:32.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.856 [2024-12-08T06:04:32.901Z] =================================================================================================================== 00:17:09.856 [2024-12-08T06:04:32.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85953' 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 85953 00:17:09.856 06:04:32 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 85953 00:17:13.144 Remove shared memory files 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:13.144 06:04:35 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:13.144 00:17:13.144 real 0m22.627s 00:17:13.144 user 0m26.175s 00:17:13.144 sys 0m0.943s 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.144 ************************************ 00:17:13.144 END TEST ftl_bdevperf 00:17:13.144 06:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:13.144 ************************************ 00:17:13.144 06:04:35 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:13.144 06:04:35 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:13.144 06:04:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.144 06:04:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:13.144 ************************************ 00:17:13.144 START TEST ftl_trim 00:17:13.144 ************************************ 00:17:13.144 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:13.144 * Looking for test storage... 00:17:13.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:13.144 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:13.144 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:13.144 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:17:13.144 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:13.144 06:04:35 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.145 06:04:35 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.145 06:04:35 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.145 06:04:35 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:13.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.145 --rc genhtml_branch_coverage=1 00:17:13.145 --rc genhtml_function_coverage=1 00:17:13.145 --rc genhtml_legend=1 00:17:13.145 --rc geninfo_all_blocks=1 00:17:13.145 --rc geninfo_unexecuted_blocks=1 00:17:13.145 00:17:13.145 ' 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:13.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.145 --rc genhtml_branch_coverage=1 00:17:13.145 --rc genhtml_function_coverage=1 00:17:13.145 --rc genhtml_legend=1 00:17:13.145 --rc geninfo_all_blocks=1 00:17:13.145 --rc geninfo_unexecuted_blocks=1 00:17:13.145 00:17:13.145 ' 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:13.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.145 --rc genhtml_branch_coverage=1 00:17:13.145 --rc genhtml_function_coverage=1 00:17:13.145 --rc genhtml_legend=1 00:17:13.145 --rc geninfo_all_blocks=1 00:17:13.145 --rc geninfo_unexecuted_blocks=1 00:17:13.145 00:17:13.145 ' 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:13.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.145 --rc genhtml_branch_coverage=1 00:17:13.145 --rc genhtml_function_coverage=1 00:17:13.145 --rc genhtml_legend=1 00:17:13.145 --rc geninfo_all_blocks=1 00:17:13.145 --rc geninfo_unexecuted_blocks=1 00:17:13.145 00:17:13.145 ' 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
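The xtrace above steps through the version gate in scripts/common.sh: "lt 1.15 2" splits both version strings on "." and "-" and compares the fields left to right, which decides whether the old or new lcov option set gets exported. A condensed, illustrative re-implementation of that comparison (not the verbatim script):

ver_lt() {                      # is version $1 older than version $2?
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field smaller -> older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                    # all fields equal -> not older
}
ver_lt 1.15 2 && echo "lcov < 2"   # the branch the trace above takes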
00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:13.145 06:04:35 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86280 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:13.145 06:04:35 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86280 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86280 ']' 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.145 06:04:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:13.145 [2024-12-08 06:04:35.869643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:13.145 [2024-12-08 06:04:35.870686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86280 ] 00:17:13.145 [2024-12-08 06:04:36.019366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:13.145 [2024-12-08 06:04:36.056112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.145 [2024-12-08 06:04:36.056159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.145 [2024-12-08 06:04:36.056240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.081 06:04:36 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.081 06:04:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:14.081 06:04:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:14.081 06:04:36 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:14.081 06:04:36 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:14.081 06:04:36 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:14.081 06:04:36 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:14.081 06:04:36 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:14.339 06:04:37 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:14.339 06:04:37 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:14.339 06:04:37 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:14.339 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:14.339 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:14.339 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:14.339 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:14.339 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:14.597 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:14.597 { 00:17:14.597 "name": "nvme0n1", 00:17:14.597 "aliases": [ 
00:17:14.597 "6d142e16-9a39-40e5-ab6a-fdf804cf737f" 00:17:14.597 ], 00:17:14.597 "product_name": "NVMe disk", 00:17:14.597 "block_size": 4096, 00:17:14.597 "num_blocks": 1310720, 00:17:14.597 "uuid": "6d142e16-9a39-40e5-ab6a-fdf804cf737f", 00:17:14.597 "numa_id": -1, 00:17:14.597 "assigned_rate_limits": { 00:17:14.597 "rw_ios_per_sec": 0, 00:17:14.597 "rw_mbytes_per_sec": 0, 00:17:14.597 "r_mbytes_per_sec": 0, 00:17:14.597 "w_mbytes_per_sec": 0 00:17:14.597 }, 00:17:14.597 "claimed": true, 00:17:14.597 "claim_type": "read_many_write_one", 00:17:14.597 "zoned": false, 00:17:14.597 "supported_io_types": { 00:17:14.597 "read": true, 00:17:14.597 "write": true, 00:17:14.597 "unmap": true, 00:17:14.597 "flush": true, 00:17:14.597 "reset": true, 00:17:14.597 "nvme_admin": true, 00:17:14.597 "nvme_io": true, 00:17:14.597 "nvme_io_md": false, 00:17:14.597 "write_zeroes": true, 00:17:14.597 "zcopy": false, 00:17:14.597 "get_zone_info": false, 00:17:14.597 "zone_management": false, 00:17:14.597 "zone_append": false, 00:17:14.597 "compare": true, 00:17:14.597 "compare_and_write": false, 00:17:14.597 "abort": true, 00:17:14.597 "seek_hole": false, 00:17:14.597 "seek_data": false, 00:17:14.597 "copy": true, 00:17:14.597 "nvme_iov_md": false 00:17:14.597 }, 00:17:14.597 "driver_specific": { 00:17:14.597 "nvme": [ 00:17:14.597 { 00:17:14.597 "pci_address": "0000:00:11.0", 00:17:14.597 "trid": { 00:17:14.597 "trtype": "PCIe", 00:17:14.597 "traddr": "0000:00:11.0" 00:17:14.597 }, 00:17:14.597 "ctrlr_data": { 00:17:14.597 "cntlid": 0, 00:17:14.597 "vendor_id": "0x1b36", 00:17:14.597 "model_number": "QEMU NVMe Ctrl", 00:17:14.597 "serial_number": "12341", 00:17:14.597 "firmware_revision": "8.0.0", 00:17:14.597 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:14.597 "oacs": { 00:17:14.597 "security": 0, 00:17:14.597 "format": 1, 00:17:14.597 "firmware": 0, 00:17:14.597 "ns_manage": 1 00:17:14.597 }, 00:17:14.597 "multi_ctrlr": false, 00:17:14.597 "ana_reporting": false 00:17:14.597 }, 00:17:14.597 "vs": { 00:17:14.597 "nvme_version": "1.4" 00:17:14.597 }, 00:17:14.597 "ns_data": { 00:17:14.597 "id": 1, 00:17:14.597 "can_share": false 00:17:14.597 } 00:17:14.597 } 00:17:14.597 ], 00:17:14.597 "mp_policy": "active_passive" 00:17:14.597 } 00:17:14.597 } 00:17:14.597 ]' 00:17:14.597 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:14.597 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:14.597 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:14.597 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:14.597 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:14.597 06:04:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:14.597 06:04:37 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:14.597 06:04:37 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:14.597 06:04:37 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:14.597 06:04:37 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:14.597 06:04:37 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:14.855 06:04:37 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=03fc5077-0abf-46a5-a58b-ceb417aa875b 00:17:14.855 06:04:37 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:14.855 06:04:37 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 03fc5077-0abf-46a5-a58b-ceb417aa875b 00:17:15.421 06:04:38 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:15.421 06:04:38 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5bec372f-74a4-4c96-8491-a140adb2cb3b 00:17:15.421 06:04:38 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5bec372f-74a4-4c96-8491-a140adb2cb3b 00:17:15.679 06:04:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:15.679 06:04:38 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:15.679 06:04:38 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:15.679 06:04:38 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:15.679 06:04:38 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:15.679 06:04:38 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:15.679 06:04:38 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:15.679 06:04:38 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:15.679 06:04:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:15.679 06:04:38 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:15.679 06:04:38 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:15.679 06:04:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:16.246 06:04:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:16.246 { 00:17:16.246 "name": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 00:17:16.246 "aliases": [ 00:17:16.246 "lvs/nvme0n1p0" 00:17:16.246 ], 00:17:16.246 "product_name": "Logical Volume", 00:17:16.246 "block_size": 4096, 00:17:16.246 "num_blocks": 26476544, 00:17:16.246 "uuid": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 00:17:16.246 "assigned_rate_limits": { 00:17:16.246 "rw_ios_per_sec": 0, 00:17:16.246 "rw_mbytes_per_sec": 0, 00:17:16.246 "r_mbytes_per_sec": 0, 00:17:16.246 "w_mbytes_per_sec": 0 00:17:16.246 }, 00:17:16.246 "claimed": false, 00:17:16.246 "zoned": false, 00:17:16.246 "supported_io_types": { 00:17:16.246 "read": true, 00:17:16.246 "write": true, 00:17:16.246 "unmap": true, 00:17:16.246 "flush": false, 00:17:16.246 "reset": true, 00:17:16.246 "nvme_admin": false, 00:17:16.246 "nvme_io": false, 00:17:16.246 "nvme_io_md": false, 00:17:16.246 "write_zeroes": true, 00:17:16.246 "zcopy": false, 00:17:16.246 "get_zone_info": false, 00:17:16.246 "zone_management": false, 00:17:16.246 "zone_append": false, 00:17:16.246 "compare": false, 00:17:16.246 "compare_and_write": false, 00:17:16.246 "abort": false, 00:17:16.246 "seek_hole": true, 00:17:16.246 "seek_data": true, 00:17:16.246 "copy": false, 00:17:16.246 "nvme_iov_md": false 00:17:16.246 }, 00:17:16.246 "driver_specific": { 00:17:16.246 "lvol": { 00:17:16.246 "lvol_store_uuid": "5bec372f-74a4-4c96-8491-a140adb2cb3b", 00:17:16.246 "base_bdev": "nvme0n1", 00:17:16.246 "thin_provision": true, 00:17:16.246 "num_allocated_clusters": 0, 00:17:16.246 "snapshot": false, 00:17:16.246 "clone": false, 00:17:16.246 "esnap_clone": false 00:17:16.246 } 00:17:16.246 } 00:17:16.246 } 00:17:16.246 ]' 00:17:16.246 06:04:38 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:16.246 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:16.246 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:16.246 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:16.246 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:16.246 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:16.246 06:04:39 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:16.246 06:04:39 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:16.246 06:04:39 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:16.504 06:04:39 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:16.504 06:04:39 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:16.504 06:04:39 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:16.504 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:16.504 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:16.504 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:16.504 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:16.504 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:16.763 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:16.763 { 00:17:16.763 "name": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 00:17:16.763 "aliases": [ 00:17:16.763 "lvs/nvme0n1p0" 00:17:16.763 ], 00:17:16.763 "product_name": "Logical Volume", 00:17:16.763 "block_size": 4096, 00:17:16.763 "num_blocks": 26476544, 00:17:16.763 "uuid": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 00:17:16.763 "assigned_rate_limits": { 00:17:16.763 "rw_ios_per_sec": 0, 00:17:16.763 "rw_mbytes_per_sec": 0, 00:17:16.763 "r_mbytes_per_sec": 0, 00:17:16.763 "w_mbytes_per_sec": 0 00:17:16.763 }, 00:17:16.763 "claimed": false, 00:17:16.763 "zoned": false, 00:17:16.763 "supported_io_types": { 00:17:16.763 "read": true, 00:17:16.763 "write": true, 00:17:16.763 "unmap": true, 00:17:16.763 "flush": false, 00:17:16.763 "reset": true, 00:17:16.763 "nvme_admin": false, 00:17:16.763 "nvme_io": false, 00:17:16.763 "nvme_io_md": false, 00:17:16.763 "write_zeroes": true, 00:17:16.763 "zcopy": false, 00:17:16.763 "get_zone_info": false, 00:17:16.763 "zone_management": false, 00:17:16.763 "zone_append": false, 00:17:16.763 "compare": false, 00:17:16.763 "compare_and_write": false, 00:17:16.763 "abort": false, 00:17:16.763 "seek_hole": true, 00:17:16.763 "seek_data": true, 00:17:16.763 "copy": false, 00:17:16.763 "nvme_iov_md": false 00:17:16.763 }, 00:17:16.763 "driver_specific": { 00:17:16.763 "lvol": { 00:17:16.763 "lvol_store_uuid": "5bec372f-74a4-4c96-8491-a140adb2cb3b", 00:17:16.763 "base_bdev": "nvme0n1", 00:17:16.763 "thin_provision": true, 00:17:16.763 "num_allocated_clusters": 0, 00:17:16.763 "snapshot": false, 00:17:16.763 "clone": false, 00:17:16.763 "esnap_clone": false 00:17:16.763 } 00:17:16.763 } 00:17:16.763 } 00:17:16.763 ]' 00:17:16.763 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:17.021 06:04:39 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:17:17.021 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:17.021 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:17.021 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:17.021 06:04:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:17.021 06:04:39 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:17.021 06:04:39 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:17.280 06:04:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:17.280 06:04:40 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:17.280 06:04:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:17.280 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:17.280 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:17.280 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:17.280 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:17.280 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 919dd7f2-8a27-4f8b-96c9-eb224241ac56 00:17:17.553 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:17.553 { 00:17:17.553 "name": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 00:17:17.553 "aliases": [ 00:17:17.553 "lvs/nvme0n1p0" 00:17:17.553 ], 00:17:17.553 "product_name": "Logical Volume", 00:17:17.553 "block_size": 4096, 00:17:17.553 "num_blocks": 26476544, 00:17:17.553 "uuid": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 00:17:17.553 "assigned_rate_limits": { 00:17:17.553 "rw_ios_per_sec": 0, 00:17:17.553 "rw_mbytes_per_sec": 0, 00:17:17.553 "r_mbytes_per_sec": 0, 00:17:17.553 "w_mbytes_per_sec": 0 00:17:17.553 }, 00:17:17.553 "claimed": false, 00:17:17.553 "zoned": false, 00:17:17.553 "supported_io_types": { 00:17:17.553 "read": true, 00:17:17.553 "write": true, 00:17:17.553 "unmap": true, 00:17:17.553 "flush": false, 00:17:17.553 "reset": true, 00:17:17.553 "nvme_admin": false, 00:17:17.553 "nvme_io": false, 00:17:17.553 "nvme_io_md": false, 00:17:17.553 "write_zeroes": true, 00:17:17.553 "zcopy": false, 00:17:17.553 "get_zone_info": false, 00:17:17.553 "zone_management": false, 00:17:17.553 "zone_append": false, 00:17:17.553 "compare": false, 00:17:17.553 "compare_and_write": false, 00:17:17.553 "abort": false, 00:17:17.553 "seek_hole": true, 00:17:17.553 "seek_data": true, 00:17:17.553 "copy": false, 00:17:17.553 "nvme_iov_md": false 00:17:17.553 }, 00:17:17.553 "driver_specific": { 00:17:17.553 "lvol": { 00:17:17.554 "lvol_store_uuid": "5bec372f-74a4-4c96-8491-a140adb2cb3b", 00:17:17.554 "base_bdev": "nvme0n1", 00:17:17.554 "thin_provision": true, 00:17:17.554 "num_allocated_clusters": 0, 00:17:17.554 "snapshot": false, 00:17:17.554 "clone": false, 00:17:17.554 "esnap_clone": false 00:17:17.554 } 00:17:17.554 } 00:17:17.554 } 00:17:17.554 ]' 00:17:17.554 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:17.554 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:17.554 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:17.554 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:17:17.554 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:17.554 06:04:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:17.554 06:04:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:17.554 06:04:40 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 919dd7f2-8a27-4f8b-96c9-eb224241ac56 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:17.823 [2024-12-08 06:04:40.750863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.750927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:17.824 [2024-12-08 06:04:40.750963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:17.824 [2024-12-08 06:04:40.750977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.754006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.754051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.824 [2024-12-08 06:04:40.754084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:17:17.824 [2024-12-08 06:04:40.754100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.754272] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:17.824 [2024-12-08 06:04:40.754641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:17.824 [2024-12-08 06:04:40.754686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.754704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.824 [2024-12-08 06:04:40.754719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:17.824 [2024-12-08 06:04:40.754733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.754968] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c471906e-ccc4-48c5-8290-7cb3cbdef2d5 00:17:17.824 [2024-12-08 06:04:40.756056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.756093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:17.824 [2024-12-08 06:04:40.756128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:17.824 [2024-12-08 06:04:40.756139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.760833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.760878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.824 [2024-12-08 06:04:40.760898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:17:17.824 [2024-12-08 06:04:40.760928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.761107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.761129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.824 [2024-12-08 06:04:40.761145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.076 ms 00:17:17.824 [2024-12-08 06:04:40.761157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.761228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.761245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:17.824 [2024-12-08 06:04:40.761267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:17.824 [2024-12-08 06:04:40.761279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.761326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:17.824 [2024-12-08 06:04:40.762825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.762867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.824 [2024-12-08 06:04:40.762886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:17:17.824 [2024-12-08 06:04:40.762919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.762974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.762995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:17.824 [2024-12-08 06:04:40.763008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:17.824 [2024-12-08 06:04:40.763024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.763060] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:17.824 [2024-12-08 06:04:40.763240] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:17.824 [2024-12-08 06:04:40.763265] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:17.824 [2024-12-08 06:04:40.763300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:17.824 [2024-12-08 06:04:40.763331] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:17.824 [2024-12-08 06:04:40.763348] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:17.824 [2024-12-08 06:04:40.763361] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:17.824 [2024-12-08 06:04:40.763375] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:17.824 [2024-12-08 06:04:40.763386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:17.824 [2024-12-08 06:04:40.763399] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:17.824 [2024-12-08 06:04:40.763412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 [2024-12-08 06:04:40.763427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:17.824 [2024-12-08 06:04:40.763451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:17.824 [2024-12-08 06:04:40.763470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.763580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.824 
[2024-12-08 06:04:40.763602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:17.824 [2024-12-08 06:04:40.763614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:17.824 [2024-12-08 06:04:40.763629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.824 [2024-12-08 06:04:40.763793] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:17.824 [2024-12-08 06:04:40.763813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:17.824 [2024-12-08 06:04:40.763826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.824 [2024-12-08 06:04:40.763840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.824 [2024-12-08 06:04:40.763854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:17.824 [2024-12-08 06:04:40.763866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:17.824 [2024-12-08 06:04:40.763877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:17.824 [2024-12-08 06:04:40.763892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:17.824 [2024-12-08 06:04:40.763903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:17.824 [2024-12-08 06:04:40.763916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.824 [2024-12-08 06:04:40.763926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:17.824 [2024-12-08 06:04:40.763940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:17.824 [2024-12-08 06:04:40.763951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.824 [2024-12-08 06:04:40.763966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:17.824 [2024-12-08 06:04:40.763978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:17.824 [2024-12-08 06:04:40.763990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.824 [2024-12-08 06:04:40.764001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:17.824 [2024-12-08 06:04:40.764014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:17.824 [2024-12-08 06:04:40.764024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.824 [2024-12-08 06:04:40.764037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:17.824 [2024-12-08 06:04:40.764048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:17.824 [2024-12-08 06:04:40.764060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.824 [2024-12-08 06:04:40.764071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:17.824 [2024-12-08 06:04:40.764083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:17.824 [2024-12-08 06:04:40.764094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.824 [2024-12-08 06:04:40.764106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:17.824 [2024-12-08 06:04:40.764117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:17.824 [2024-12-08 06:04:40.764149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.824 [2024-12-08 06:04:40.764162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:17.824 [2024-12-08 06:04:40.764177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:17.824 [2024-12-08 06:04:40.764450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.824 [2024-12-08 06:04:40.764516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:17.824 [2024-12-08 06:04:40.764636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:17.824 [2024-12-08 06:04:40.764692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.824 [2024-12-08 06:04:40.764748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:17.824 [2024-12-08 06:04:40.764899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:17.824 [2024-12-08 06:04:40.764952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.824 [2024-12-08 06:04:40.765110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:17.824 [2024-12-08 06:04:40.765162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:17.824 [2024-12-08 06:04:40.765405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.824 [2024-12-08 06:04:40.765459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:17.824 [2024-12-08 06:04:40.765502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:17.824 [2024-12-08 06:04:40.765618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.824 [2024-12-08 06:04:40.765676] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:17.824 [2024-12-08 06:04:40.765716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:17.824 [2024-12-08 06:04:40.765759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.824 [2024-12-08 06:04:40.765812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.824 [2024-12-08 06:04:40.765911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:17.825 [2024-12-08 06:04:40.765966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:17.825 [2024-12-08 06:04:40.766004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:17.825 [2024-12-08 06:04:40.766041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:17.825 [2024-12-08 06:04:40.766094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:17.825 [2024-12-08 06:04:40.766297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:17.825 [2024-12-08 06:04:40.766330] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:17.825 [2024-12-08 06:04:40.766347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.825 [2024-12-08 06:04:40.766363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:17.825 [2024-12-08 06:04:40.766376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:17.825 [2024-12-08 06:04:40.766392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:17.825 [2024-12-08 06:04:40.766404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:17.825 [2024-12-08 06:04:40.766418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:17.825 [2024-12-08 06:04:40.766431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:17.825 [2024-12-08 06:04:40.766448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:17.825 [2024-12-08 06:04:40.766460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:17.825 [2024-12-08 06:04:40.766474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:17.825 [2024-12-08 06:04:40.766486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:17.825 [2024-12-08 06:04:40.766500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:17.825 [2024-12-08 06:04:40.766512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:17.825 [2024-12-08 06:04:40.766527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:17.825 [2024-12-08 06:04:40.766540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:17.825 [2024-12-08 06:04:40.766553] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:17.825 [2024-12-08 06:04:40.766567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.825 [2024-12-08 06:04:40.766582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:17.825 [2024-12-08 06:04:40.766594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:17.825 [2024-12-08 06:04:40.766608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:17.825 [2024-12-08 06:04:40.766620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:17.825 [2024-12-08 06:04:40.766637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.825 [2024-12-08 06:04:40.766650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:17.825 [2024-12-08 06:04:40.766673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:17:17.825 [2024-12-08 06:04:40.766687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.825 [2024-12-08 06:04:40.766842] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:17.825 [2024-12-08 06:04:40.766862] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:20.357 [2024-12-08 06:04:42.803313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.803634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:20.357 [2024-12-08 06:04:42.803675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2036.477 ms 00:17:20.357 [2024-12-08 06:04:42.803690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.819741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.819822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.357 [2024-12-08 06:04:42.819864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.933 ms 00:17:20.357 [2024-12-08 06:04:42.819898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.820115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.820137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:20.357 [2024-12-08 06:04:42.820153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:20.357 [2024-12-08 06:04:42.820165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.829928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.830192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.357 [2024-12-08 06:04:42.830254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.670 ms 00:17:20.357 [2024-12-08 06:04:42.830271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.830425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.830448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.357 [2024-12-08 06:04:42.830478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:20.357 [2024-12-08 06:04:42.830492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.830887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.830909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.357 [2024-12-08 06:04:42.830927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:17:20.357 [2024-12-08 06:04:42.830942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.831146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.831168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.357 [2024-12-08 06:04:42.831210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:20.357 [2024-12-08 06:04:42.831227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.837543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.837604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:20.357 [2024-12-08 06:04:42.837625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.267 ms 00:17:20.357 [2024-12-08 06:04:42.837638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.846915] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:20.357 [2024-12-08 06:04:42.861167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.861296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:20.357 [2024-12-08 06:04:42.861320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.361 ms 00:17:20.357 [2024-12-08 06:04:42.861335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.913370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.913466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:20.357 [2024-12-08 06:04:42.913486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.902 ms 00:17:20.357 [2024-12-08 06:04:42.913503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.913746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.913769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:20.357 [2024-12-08 06:04:42.913782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:20.357 [2024-12-08 06:04:42.913798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.357 [2024-12-08 06:04:42.917691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.357 [2024-12-08 06:04:42.917752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:20.358 [2024-12-08 06:04:42.917770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.836 ms 00:17:20.358 [2024-12-08 06:04:42.917783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.921017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.921076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:20.358 [2024-12-08 06:04:42.921093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:17:20.358 [2024-12-08 06:04:42.921106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.921537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.921590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:20.358 [2024-12-08 06:04:42.921605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:17:20.358 [2024-12-08 06:04:42.921625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.951853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.951936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:20.358 [2024-12-08 06:04:42.951956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.166 ms 00:17:20.358 [2024-12-08 06:04:42.951969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
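[Editor's note: the trace above and below records bdev_ftl_create bringing up FTL instance ftl0. Condensed, the RPC sequence this test used to assemble the bdev stack is sketched below as a non-authoritative summary; the rpc.py path, device addresses, sizes, and flags are the values observed in this run and would differ elsewhere.]

  # assumes a running SPDK app; rpc.py path as used in this log
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # base device: lvstore + thin-provisioned lvol on nvme0n1
  lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
  base=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u $lvs)
  # cache device: attach a second NVMe controller, split off a 5171 MiB partition
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1        # -> nvc0n1p0
  # FTL bdev on top: base lvol plus NV cache partition
  $RPC -t 240 bdev_ftl_create -b ftl0 -d $base -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

[End editor's note; the startup trace continues.]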
00:17:20.358 [2024-12-08 06:04:42.956579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.956658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:20.358 [2024-12-08 06:04:42.956725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.516 ms 00:17:20.358 [2024-12-08 06:04:42.956757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.960572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.960647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:20.358 [2024-12-08 06:04:42.960663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:17:20.358 [2024-12-08 06:04:42.960676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.964669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.964731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:20.358 [2024-12-08 06:04:42.964763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:17:20.358 [2024-12-08 06:04:42.964779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.964859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.964882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:20.358 [2024-12-08 06:04:42.964896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:20.358 [2024-12-08 06:04:42.964913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.964999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.358 [2024-12-08 06:04:42.965017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:20.358 [2024-12-08 06:04:42.965028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:20.358 [2024-12-08 06:04:42.965041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.358 [2024-12-08 06:04:42.966331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:20.358 [2024-12-08 06:04:42.967631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2214.952 ms, result 0 00:17:20.358 [2024-12-08 06:04:42.968524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.358 { 00:17:20.358 "name": "ftl0", 00:17:20.358 "uuid": "c471906e-ccc4-48c5-8290-7cb3cbdef2d5" 00:17:20.358 } 00:17:20.358 06:04:42 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:20.358 06:04:42 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:20.358 06:04:42 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:20.358 06:04:42 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:20.358 06:04:42 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:20.358 06:04:42 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:20.358 06:04:42 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:20.358 06:04:43 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:20.618 [ 00:17:20.618 { 00:17:20.618 "name": "ftl0", 00:17:20.618 "aliases": [ 00:17:20.618 "c471906e-ccc4-48c5-8290-7cb3cbdef2d5" 00:17:20.618 ], 00:17:20.618 "product_name": "FTL disk", 00:17:20.618 "block_size": 4096, 00:17:20.618 "num_blocks": 23592960, 00:17:20.618 "uuid": "c471906e-ccc4-48c5-8290-7cb3cbdef2d5", 00:17:20.618 "assigned_rate_limits": { 00:17:20.618 "rw_ios_per_sec": 0, 00:17:20.618 "rw_mbytes_per_sec": 0, 00:17:20.618 "r_mbytes_per_sec": 0, 00:17:20.618 "w_mbytes_per_sec": 0 00:17:20.618 }, 00:17:20.618 "claimed": false, 00:17:20.618 "zoned": false, 00:17:20.618 "supported_io_types": { 00:17:20.618 "read": true, 00:17:20.618 "write": true, 00:17:20.618 "unmap": true, 00:17:20.618 "flush": true, 00:17:20.618 "reset": false, 00:17:20.618 "nvme_admin": false, 00:17:20.618 "nvme_io": false, 00:17:20.618 "nvme_io_md": false, 00:17:20.618 "write_zeroes": true, 00:17:20.618 "zcopy": false, 00:17:20.618 "get_zone_info": false, 00:17:20.618 "zone_management": false, 00:17:20.618 "zone_append": false, 00:17:20.618 "compare": false, 00:17:20.618 "compare_and_write": false, 00:17:20.618 "abort": false, 00:17:20.618 "seek_hole": false, 00:17:20.618 "seek_data": false, 00:17:20.618 "copy": false, 00:17:20.618 "nvme_iov_md": false 00:17:20.618 }, 00:17:20.618 "driver_specific": { 00:17:20.618 "ftl": { 00:17:20.618 "base_bdev": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 00:17:20.618 "cache": "nvc0n1p0" 00:17:20.618 } 00:17:20.618 } 00:17:20.618 } 00:17:20.618 ] 00:17:20.618 06:04:43 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:20.618 06:04:43 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:20.618 06:04:43 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:20.883 06:04:43 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:21.153 06:04:43 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:21.153 06:04:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:21.153 { 00:17:21.153 "name": "ftl0", 00:17:21.153 "aliases": [ 00:17:21.153 "c471906e-ccc4-48c5-8290-7cb3cbdef2d5" 00:17:21.153 ], 00:17:21.153 "product_name": "FTL disk", 00:17:21.153 "block_size": 4096, 00:17:21.153 "num_blocks": 23592960, 00:17:21.153 "uuid": "c471906e-ccc4-48c5-8290-7cb3cbdef2d5", 00:17:21.153 "assigned_rate_limits": { 00:17:21.153 "rw_ios_per_sec": 0, 00:17:21.153 "rw_mbytes_per_sec": 0, 00:17:21.153 "r_mbytes_per_sec": 0, 00:17:21.153 "w_mbytes_per_sec": 0 00:17:21.153 }, 00:17:21.153 "claimed": false, 00:17:21.153 "zoned": false, 00:17:21.153 "supported_io_types": { 00:17:21.153 "read": true, 00:17:21.153 "write": true, 00:17:21.153 "unmap": true, 00:17:21.153 "flush": true, 00:17:21.153 "reset": false, 00:17:21.153 "nvme_admin": false, 00:17:21.153 "nvme_io": false, 00:17:21.153 "nvme_io_md": false, 00:17:21.153 "write_zeroes": true, 00:17:21.153 "zcopy": false, 00:17:21.153 "get_zone_info": false, 00:17:21.153 "zone_management": false, 00:17:21.153 "zone_append": false, 00:17:21.153 "compare": false, 00:17:21.153 "compare_and_write": false, 00:17:21.153 "abort": false, 00:17:21.153 "seek_hole": false, 00:17:21.153 "seek_data": false, 00:17:21.153 "copy": false, 00:17:21.153 "nvme_iov_md": false 00:17:21.153 }, 00:17:21.153 "driver_specific": { 00:17:21.153 "ftl": { 00:17:21.153 "base_bdev": "919dd7f2-8a27-4f8b-96c9-eb224241ac56", 
00:17:21.153 "cache": "nvc0n1p0" 00:17:21.153 } 00:17:21.153 } 00:17:21.153 } 00:17:21.153 ]' 00:17:21.153 06:04:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:21.412 06:04:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:21.412 06:04:44 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:21.672 [2024-12-08 06:04:44.466233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.466293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:21.672 [2024-12-08 06:04:44.466349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.672 [2024-12-08 06:04:44.466363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.466437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:21.672 [2024-12-08 06:04:44.466888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.466907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:21.672 [2024-12-08 06:04:44.466920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:17:21.672 [2024-12-08 06:04:44.466936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.467535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.467575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:21.672 [2024-12-08 06:04:44.467591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:17:21.672 [2024-12-08 06:04:44.467609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.471413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.471473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:21.672 [2024-12-08 06:04:44.471489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:17:21.672 [2024-12-08 06:04:44.471504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.479376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.479415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:21.672 [2024-12-08 06:04:44.479432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.789 ms 00:17:21.672 [2024-12-08 06:04:44.479471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.480968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.481019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:21.672 [2024-12-08 06:04:44.481036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:17:21.672 [2024-12-08 06:04:44.481050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.485035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.485084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:21.672 [2024-12-08 06:04:44.485102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.929 ms 00:17:21.672 [2024-12-08 06:04:44.485117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.485333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.485359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:21.672 [2024-12-08 06:04:44.485374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:21.672 [2024-12-08 06:04:44.485391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.486988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.487032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:21.672 [2024-12-08 06:04:44.487049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:17:21.672 [2024-12-08 06:04:44.487064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.488536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.488581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:21.672 [2024-12-08 06:04:44.488597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:17:21.672 [2024-12-08 06:04:44.488610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.489826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.489991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:21.672 [2024-12-08 06:04:44.490017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:17:21.672 [2024-12-08 06:04:44.490033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.491326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.672 [2024-12-08 06:04:44.491368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:21.672 [2024-12-08 06:04:44.491384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:17:21.672 [2024-12-08 06:04:44.491398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.672 [2024-12-08 06:04:44.491458] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:21.672 [2024-12-08 06:04:44.491487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:21.672 [2024-12-08 06:04:44.491502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:21.672 [2024-12-08 06:04:44.491519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:21.672 [2024-12-08 06:04:44.491532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:21.672 [2024-12-08 06:04:44.491546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:21.672 [2024-12-08 06:04:44.491558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:21.672 [2024-12-08 06:04:44.491573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:21.672 [2024-12-08 06:04:44.491585] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free [... bands 9 through 81 elided by editor: each reports 0 / 261120 wr_cnt: 0 state: free ...] 00:17:21.673 [2024-12-08 06:04:44.492586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120
wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:21.673 [2024-12-08 06:04:44.492881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:21.673 [2024-12-08 06:04:44.492894] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c471906e-ccc4-48c5-8290-7cb3cbdef2d5 00:17:21.673 [2024-12-08 06:04:44.492908] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:21.673 [2024-12-08 06:04:44.492920] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:21.673 [2024-12-08 06:04:44.492934] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:21.673 [2024-12-08 06:04:44.492946] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:21.673 [2024-12-08 06:04:44.492959] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:21.673 [2024-12-08 06:04:44.492970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:21.673 
[2024-12-08 06:04:44.493003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:21.673 [2024-12-08 06:04:44.493014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:21.673 [2024-12-08 06:04:44.493026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:21.674 [2024-12-08 06:04:44.493040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.674 [2024-12-08 06:04:44.493054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:21.674 [2024-12-08 06:04:44.493067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:17:21.674 [2024-12-08 06:04:44.493082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.494580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.674 [2024-12-08 06:04:44.494615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:21.674 [2024-12-08 06:04:44.494630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:17:21.674 [2024-12-08 06:04:44.494644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.494738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.674 [2024-12-08 06:04:44.494756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:21.674 [2024-12-08 06:04:44.494769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:21.674 [2024-12-08 06:04:44.494783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.500338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.500388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.674 [2024-12-08 06:04:44.500406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.500423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.500555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.500577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.674 [2024-12-08 06:04:44.500605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.500621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.500713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.500741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.674 [2024-12-08 06:04:44.500755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.500769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.500824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.500846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.674 [2024-12-08 06:04:44.500859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.500873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.510132] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.510228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.674 [2024-12-08 06:04:44.510249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.510267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.518258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.518331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.674 [2024-12-08 06:04:44.518365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.518396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.518512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.518534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.674 [2024-12-08 06:04:44.518547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.518560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.518616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.518636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.674 [2024-12-08 06:04:44.518649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.518662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.518771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.518793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.674 [2024-12-08 06:04:44.518805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.518818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.518883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.518904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:21.674 [2024-12-08 06:04:44.518920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.518936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.518992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.519010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.674 [2024-12-08 06:04:44.519022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.519035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 [2024-12-08 06:04:44.519102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.674 [2024-12-08 06:04:44.519124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.674 [2024-12-08 06:04:44.519136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.674 [2024-12-08 06:04:44.519149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.674 
00:17:21.674 [2024-12-08 06:04:44.519640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.402 ms, result 0
00:17:21.674 true
00:17:21.674 06:04:44 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86280
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86280 ']'
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86280
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86280
00:17:21.674 killing process with pid 86280
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86280'
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86280
00:17:21.674 06:04:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86280
00:17:24.964 06:04:47 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:17:25.899 65536+0 records in
00:17:25.899 65536+0 records out
00:17:25.899 268435456 bytes (268 MB, 256 MiB) copied, 1.06352 s, 252 MB/s
00:17:25.899 06:04:48 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:25.899 [2024-12-08 06:04:48.698853] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:17:25.899 [2024-12-08 06:04:48.699271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86457 ]
00:17:25.899 [2024-12-08 06:04:48.851652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:25.899 [2024-12-08 06:04:48.894465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:26.159 [2024-12-08 06:04:48.986058] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
00:17:26.159 FTL startup then runs the following traced steps (mngt/ftl_mngt.c trace_step entries, every status 0):
00:17:26.159   Action 'Check configuration': 0.006 ms
00:17:26.159   Action 'Open base bdev': 2.825 ms
00:17:26.159   [FTL][ftl0] Using nvc0n1p0 as write buffer cache; using bdev as NV Cache device
00:17:26.159   Action 'Open cache bdev': 0.378 ms
00:17:26.159   [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:26.159   Action 'Load super block': 2.331 ms
00:17:26.159   Action 'Validate super block': 0.025 ms
00:17:26.159   Action 'Initialize memory pools': 4.501 ms
00:17:26.160   Action 'Initialize bands': 0.108 ms
00:17:26.160   Action 'Register IO device': 0.025 ms ([FTL][ftl0] FTL IO channel created on ftl_core_thread)
00:17:26.160   Action 'Initialize core IO channel': 1.428 ms
00:17:26.160   Action 'Decorate bands': 0.011 ms
00:17:26.160 [FTL][ftl0] FTL layout setup mode 0; nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes (the same three sizes are stored back)
00:17:26.160 [FTL][ftl0] Base device capacity: 103424.00 MiB; NV cache device capacity: 5171.00 MiB
00:17:26.160 [FTL][ftl0] L2P entries: 23592960; L2P address size: 4; P2L checkpoint pages: 2048; NV cache chunk count: 5
00:17:26.160   Action 'Initialize layout': 0.408 ms
00:17:26.160   Action 'Verify layout': 0.085 ms
00:17:26.160 [FTL][ftl0] NV cache layout (region: offset / blocks, both in MiB):
00:17:26.160   sb: 0.00 / 0.12           l2p: 0.12 / 90.00
00:17:26.160   band_md: 90.12 / 0.50     band_md_mirror: 90.62 / 0.50
00:17:26.160   nvc_md: 123.88 / 0.12     nvc_md_mirror: 124.00 / 0.12
00:17:26.160   p2l0: 91.12 / 8.00        p2l1: 99.12 / 8.00
00:17:26.160   p2l2: 107.12 / 8.00       p2l3: 115.12 / 8.00
00:17:26.160   trim_md: 123.12 / 0.25    trim_md_mirror: 123.38 / 0.25
00:17:26.160   trim_log: 123.62 / 0.12   trim_log_mirror: 123.75 / 0.12
00:17:26.161 [FTL][ftl0] Base device layout: sb_mirror: 0.00 / 0.12; vmap: 102400.25 / 3.38; data_btm: 0.25 / 102400.00
00:17:26.161 [FTL][ftl0] SB metadata layout - nvc (type / ver / blk_offs / blk_sz):
00:17:26.161   0x0 / 5 / 0x0 / 0x20         0x2 / 0 / 0x20 / 0x5a00
00:17:26.161   0x3 / 2 / 0x5a20 / 0x80      0x4 / 2 / 0x5aa0 / 0x80
00:17:26.161   0xa / 2 / 0x5b20 / 0x800     0xb / 2 / 0x6320 / 0x800
00:17:26.161   0xc / 2 / 0x6b20 / 0x800     0xd / 2 / 0x7320 / 0x800
00:17:26.161   0xe / 0 / 0x7b20 / 0x40      0xf / 0 / 0x7b60 / 0x40
00:17:26.161   0x10 / 1 / 0x7ba0 / 0x20     0x11 / 1 / 0x7bc0 / 0x20
00:17:26.161   0x6 / 2 / 0x7be0 / 0x20      0x7 / 2 / 0x7c00 / 0x20
00:17:26.161   0xfffffffe / 0 / 0x7c20 / 0x13b6e0
00:17:26.161 [FTL][ftl0] SB metadata layout - base dev (type / ver / blk_offs / blk_sz):
00:17:26.161   0x1 / 5 / 0x0 / 0x20         0xfffffffe / 0 / 0x20 / 0x20
00:17:26.161   0x9 / 0 / 0x40 / 0x1900000   0x5 / 0 / 0x1900040 / 0x360
00:17:26.161   0xfffffffe / 0 / 0x19003a0 / 0x3fc60
00:17:26.161   Action 'Layout upgrade': 1.066 ms
00:17:26.161   Action 'Initialize metadata': 16.576 ms
00:17:26.161   Action 'Initialize band addresses': 0.094 ms
00:17:26.161   Action 'Initialize NV cache': 10.158 ms
00:17:26.161   Action 'Initialize valid map': 0.005 ms
00:17:26.161   Action 'Initialize trim map': 0.385 ms
00:17:26.161   Action 'Initialize bands metadata': 0.204 ms
00:17:26.161   Action 'Initialize reloc': 5.903 ms
00:17:26.161 [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4; state loaded successfully
00:17:26.161   Action 'Restore NV cache metadata': 2.424 ms
00:17:26.421   Action 'Restore valid map metadata': 15.692 ms
00:17:26.421   Action 'Restore band info metadata': 1.909 ms
00:17:26.421   Action 'Restore trim metadata': 1.530 ms
00:17:26.421   Action 'Initialize P2L checkpointing': 0.285 ms
00:17:26.421   Action 'Restore P2L checkpoints': 15.695 ms
00:17:26.421 [FTL][ftl0] l2p maximum resident size is: 59 (of 60) MiB
00:17:26.421   Action 'Initialize L2P': 20.871 ms
00:17:26.421   Action 'Restore L2P': 0.007 ms
00:17:26.421   Action 'Finalize band initialization': 0.039 ms
00:17:26.421   Action 'Start core poller': 0.007 ms
00:17:26.421 [FTL][ftl0] Self test skipped
00:17:26.421   Action 'Self test on startup': 0.018 ms
00:17:26.421   Action 'Set FTL dirty state': 3.496 ms
00:17:26.421   Action 'Finalize initialization': 0.040 ms
00:17:26.421 [2024-12-08 06:04:49.259656] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:26.421 [2024-12-08 06:04:49.260950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.442 ms, result 0
00:17:26.421 [2024-12-08 06:04:49.261709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:26.421 [2024-12-08 06:04:49.271149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:27.356 Copying: 22/256 [MB] (22 MBps), 46/256 (23 MBps), 69/256 (23 MBps), 92/256 (23 MBps), 115/256 (23 MBps), 138/256 (23 MBps), 161/256 (23 MBps), 185/256 (23 MBps), 207/256 (22 MBps), 230/256 (22 MBps), 253/256 (22 MBps), 256/256 done (average 23 MBps)
00:17:37.503 [2024-12-08 06:05:00.388344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:37.503 FTL shutdown then runs the following traced steps (trace_step entries, every status 0):
00:17:37.503   Action 'Deinit core IO channel': 0.003 ms ([FTL][ftl0] FTL IO channel destroy on ftl_core_thread)
00:17:37.503   Action 'Unregister IO device': 0.404 ms
00:17:37.503   Action 'Stop core poller': 1.728 ms
00:17:37.503   Action 'Persist L2P': 7.002 ms
00:17:37.503   Action 'Finish L2P trims': 7.680 ms
00:17:37.503   Action 'Persist NV cache metadata': 1.211 ms
00:17:37.504   Action 'Persist valid map metadata': 3.099 ms
00:17:37.504   Action 'Persist P2L metadata': 0.146 ms
00:17:37.504   Action 'Persist band info metadata': 2.030 ms
00:17:37.504   Action 'Persist trim metadata': 1.397 ms
00:17:37.504   Action 'Persist superblock': 1.082 ms
00:17:37.504   Action 'Set FTL clean state': 1.001 ms
00:17:37.504 [2024-12-08 06:05:00.418131] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:37.504 [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (identical for every band)
00:17:37.505 [2024-12-08 06:05:00.419446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:37.505 [FTL][ftl0] device UUID: c471906e-ccc4-48c5-8290-7cb3cbdef2d5
00:17:37.505 [FTL][ftl0] total valid LBAs: 0
00:17:37.505 [FTL][ftl0] total writes: 960
00:17:37.505 [FTL][ftl0] user writes: 0
00:17:37.505 [FTL][ftl0] WAF: inf
00:17:37.505 [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:17:37.505 Shutdown then continues (trace_step entries, every status 0):
00:17:37.505   Action 'Dump statistics': 1.454 ms
00:17:37.505   Action 'Deinitialize L2P': 1.678 ms
00:17:37.505   Action 'Deinitialize P2L checkpointing': 0.058 ms
00:17:37.505   Rollback 'Initialize reloc': 0.000 ms
00:17:37.506   Rollback 'Initialize bands metadata': 0.000 ms
[FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.427827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.428042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.428109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.506 [2024-12-08 06:05:00.428313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.428369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.428556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.428612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.506 [2024-12-08 06:05:00.428746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.428797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.437164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.437445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.506 [2024-12-08 06:05:00.437564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.437587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.444226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.444443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.506 [2024-12-08 06:05:00.444480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.444493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.444536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.444550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.506 [2024-12-08 06:05:00.444562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.444573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.444606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.444620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.506 [2024-12-08 06:05:00.444631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.444642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.444736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.444755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.506 [2024-12-08 06:05:00.444782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.444792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.444851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.444869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:37.506 
[2024-12-08 06:05:00.444881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.444892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.444957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.444995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.506 [2024-12-08 06:05:00.445006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.445016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.445064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.506 [2024-12-08 06:05:00.445079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.506 [2024-12-08 06:05:00.445090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.506 [2024-12-08 06:05:00.445100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.506 [2024-12-08 06:05:00.445272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.724 ms, result 0 00:17:37.766 00:17:37.766 00:17:37.766 06:05:00 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86583 00:17:37.766 06:05:00 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:37.766 06:05:00 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86583 00:17:37.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.766 06:05:00 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86583 ']' 00:17:37.766 06:05:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.766 06:05:00 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.766 06:05:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.766 06:05:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.766 06:05:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:38.025 [2024-12-08 06:05:00.912596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:38.025 [2024-12-08 06:05:00.913061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86583 ] 00:17:38.025 [2024-12-08 06:05:01.061697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.285 [2024-12-08 06:05:01.098543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.853 06:05:01 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.853 06:05:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:38.853 06:05:01 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:39.114 [2024-12-08 06:05:02.108965] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.114 [2024-12-08 06:05:02.109065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.375 [2024-12-08 06:05:02.277933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.375 [2024-12-08 06:05:02.278211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:39.375 [2024-12-08 06:05:02.278241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:39.375 [2024-12-08 06:05:02.278256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.375 [2024-12-08 06:05:02.280724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.375 [2024-12-08 06:05:02.280778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.375 [2024-12-08 06:05:02.280796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:17:39.375 [2024-12-08 06:05:02.280816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.375 [2024-12-08 06:05:02.280923] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:39.376 [2024-12-08 06:05:02.281256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:39.376 [2024-12-08 06:05:02.281296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.281312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.376 [2024-12-08 06:05:02.281333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:17:39.376 [2024-12-08 06:05:02.281346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.282664] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:39.376 [2024-12-08 06:05:02.284805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.284846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:39.376 [2024-12-08 06:05:02.284888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.138 ms 00:17:39.376 [2024-12-08 06:05:02.284899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.284967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.284985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:39.376 [2024-12-08 06:05:02.285010] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:39.376 [2024-12-08 06:05:02.285021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.289425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.289463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.376 [2024-12-08 06:05:02.289496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:17:39.376 [2024-12-08 06:05:02.289507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.289687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.289707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.376 [2024-12-08 06:05:02.289730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:39.376 [2024-12-08 06:05:02.289741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.289778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.289801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:39.376 [2024-12-08 06:05:02.289815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:39.376 [2024-12-08 06:05:02.289832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.289867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:39.376 [2024-12-08 06:05:02.291193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.291275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.376 [2024-12-08 06:05:02.291308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:17:39.376 [2024-12-08 06:05:02.291336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.291396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.291424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:39.376 [2024-12-08 06:05:02.291436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:39.376 [2024-12-08 06:05:02.291448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.291506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:39.376 [2024-12-08 06:05:02.291536] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:39.376 [2024-12-08 06:05:02.291579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:39.376 [2024-12-08 06:05:02.291605] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:39.376 [2024-12-08 06:05:02.291732] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:39.376 [2024-12-08 06:05:02.291766] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:39.376 [2024-12-08 06:05:02.291803] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:39.376 [2024-12-08 06:05:02.291835] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:39.376 [2024-12-08 06:05:02.291847] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:39.376 [2024-12-08 06:05:02.291878] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:39.376 [2024-12-08 06:05:02.291895] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:39.376 [2024-12-08 06:05:02.291908] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:39.376 [2024-12-08 06:05:02.291918] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:39.376 [2024-12-08 06:05:02.291930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.291943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:39.376 [2024-12-08 06:05:02.291956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:17:39.376 [2024-12-08 06:05:02.291972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.292067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.376 [2024-12-08 06:05:02.292080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:39.376 [2024-12-08 06:05:02.292092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:39.376 [2024-12-08 06:05:02.292102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.376 [2024-12-08 06:05:02.292208] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:39.376 [2024-12-08 06:05:02.292229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:39.376 [2024-12-08 06:05:02.292247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.376 [2024-12-08 06:05:02.292258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.376 [2024-12-08 06:05:02.292463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:39.376 [2024-12-08 06:05:02.292524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:39.376 [2024-12-08 06:05:02.292567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:39.376 [2024-12-08 06:05:02.292605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:39.376 [2024-12-08 06:05:02.292644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:39.376 [2024-12-08 06:05:02.292797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.376 [2024-12-08 06:05:02.292892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:39.376 [2024-12-08 06:05:02.292970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:39.376 [2024-12-08 06:05:02.293010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.376 [2024-12-08 06:05:02.293047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:39.376 [2024-12-08 06:05:02.293086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:39.376 [2024-12-08 06:05:02.293122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.376 
[2024-12-08 06:05:02.293160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:39.376 [2024-12-08 06:05:02.293455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:39.376 [2024-12-08 06:05:02.293513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.376 [2024-12-08 06:05:02.293555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:39.376 [2024-12-08 06:05:02.293598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:39.376 [2024-12-08 06:05:02.293650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.376 [2024-12-08 06:05:02.293791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:39.376 [2024-12-08 06:05:02.293814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:39.376 [2024-12-08 06:05:02.293827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.376 [2024-12-08 06:05:02.293837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:39.376 [2024-12-08 06:05:02.293849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:39.376 [2024-12-08 06:05:02.293859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.376 [2024-12-08 06:05:02.293873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:39.376 [2024-12-08 06:05:02.293883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:39.376 [2024-12-08 06:05:02.293894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.376 [2024-12-08 06:05:02.293904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:39.376 [2024-12-08 06:05:02.293916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:39.376 [2024-12-08 06:05:02.293926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.376 [2024-12-08 06:05:02.293938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:39.376 [2024-12-08 06:05:02.293947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:39.376 [2024-12-08 06:05:02.293961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.376 [2024-12-08 06:05:02.293971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:39.376 [2024-12-08 06:05:02.293983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:39.376 [2024-12-08 06:05:02.293993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.376 [2024-12-08 06:05:02.294004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:39.377 [2024-12-08 06:05:02.294014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:39.377 [2024-12-08 06:05:02.294025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.377 [2024-12-08 06:05:02.294036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:39.377 [2024-12-08 06:05:02.294048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:39.377 [2024-12-08 06:05:02.294059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.377 [2024-12-08 06:05:02.294082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.377 [2024-12-08 06:05:02.294096] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:39.377 [2024-12-08 06:05:02.294108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:39.377 [2024-12-08 06:05:02.294118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:39.377 [2024-12-08 06:05:02.294129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:39.377 [2024-12-08 06:05:02.294139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:39.377 [2024-12-08 06:05:02.294169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:39.377 [2024-12-08 06:05:02.294181] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:39.377 [2024-12-08 06:05:02.294196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.377 [2024-12-08 06:05:02.294208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:39.377 [2024-12-08 06:05:02.294237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:39.377 [2024-12-08 06:05:02.294249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:39.377 [2024-12-08 06:05:02.294261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:39.377 [2024-12-08 06:05:02.294272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:39.377 [2024-12-08 06:05:02.294284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:39.377 [2024-12-08 06:05:02.294294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:39.377 [2024-12-08 06:05:02.294306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:39.377 [2024-12-08 06:05:02.294317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:39.377 [2024-12-08 06:05:02.294328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:39.377 [2024-12-08 06:05:02.294339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:39.377 [2024-12-08 06:05:02.294351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:39.377 [2024-12-08 06:05:02.294362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:39.377 [2024-12-08 06:05:02.294376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:39.377 [2024-12-08 06:05:02.294396] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:39.377 [2024-12-08 
06:05:02.294410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.377 [2024-12-08 06:05:02.294422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:39.377 [2024-12-08 06:05:02.294435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:39.377 [2024-12-08 06:05:02.294446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:39.377 [2024-12-08 06:05:02.294458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:39.377 [2024-12-08 06:05:02.294471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.294488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:39.377 [2024-12-08 06:05:02.294500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:17:39.377 [2024-12-08 06:05:02.294512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.302822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.302893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.377 [2024-12-08 06:05:02.302911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.200 ms 00:17:39.377 [2024-12-08 06:05:02.302924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.303075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.303100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:39.377 [2024-12-08 06:05:02.303115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:39.377 [2024-12-08 06:05:02.303128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.311077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.311124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.377 [2024-12-08 06:05:02.311155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.921 ms 00:17:39.377 [2024-12-08 06:05:02.311167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.311287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.311312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.377 [2024-12-08 06:05:02.311340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:39.377 [2024-12-08 06:05:02.311371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.311711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.311814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.377 [2024-12-08 06:05:02.311853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:39.377 [2024-12-08 06:05:02.311880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:39.377 [2024-12-08 06:05:02.312062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.312091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.377 [2024-12-08 06:05:02.312106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:17:39.377 [2024-12-08 06:05:02.312119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.328041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.328111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.377 [2024-12-08 06:05:02.328129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.892 ms 00:17:39.377 [2024-12-08 06:05:02.328142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.330540] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:39.377 [2024-12-08 06:05:02.330584] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:39.377 [2024-12-08 06:05:02.330627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.330678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:39.377 [2024-12-08 06:05:02.330689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:17:39.377 [2024-12-08 06:05:02.330701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.345793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.345869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:39.377 [2024-12-08 06:05:02.345886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.042 ms 00:17:39.377 [2024-12-08 06:05:02.345901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.347990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.348185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:39.377 [2024-12-08 06:05:02.348222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:17:39.377 [2024-12-08 06:05:02.348237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.349908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.349949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:39.377 [2024-12-08 06:05:02.349965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:17:39.377 [2024-12-08 06:05:02.349978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.350482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.350510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:39.377 [2024-12-08 06:05:02.350524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:17:39.377 [2024-12-08 06:05:02.350537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.366949] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.367030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:39.377 [2024-12-08 06:05:02.367050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.379 ms 00:17:39.377 [2024-12-08 06:05:02.367065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.377 [2024-12-08 06:05:02.374887] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:39.377 [2024-12-08 06:05:02.387380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.377 [2024-12-08 06:05:02.387704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:39.377 [2024-12-08 06:05:02.387744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.210 ms 00:17:39.378 [2024-12-08 06:05:02.387768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.378 [2024-12-08 06:05:02.387952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.378 [2024-12-08 06:05:02.387980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:39.378 [2024-12-08 06:05:02.387996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:39.378 [2024-12-08 06:05:02.388019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.378 [2024-12-08 06:05:02.388096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.378 [2024-12-08 06:05:02.388113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:39.378 [2024-12-08 06:05:02.388130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:39.378 [2024-12-08 06:05:02.388140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.378 [2024-12-08 06:05:02.388179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.378 [2024-12-08 06:05:02.388195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:39.378 [2024-12-08 06:05:02.388210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:39.378 [2024-12-08 06:05:02.388220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.378 [2024-12-08 06:05:02.388307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:39.378 [2024-12-08 06:05:02.388326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.378 [2024-12-08 06:05:02.388338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:39.378 [2024-12-08 06:05:02.388350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:39.378 [2024-12-08 06:05:02.388362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.378 [2024-12-08 06:05:02.391916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.378 [2024-12-08 06:05:02.391982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:39.378 [2024-12-08 06:05:02.391999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.511 ms 00:17:39.378 [2024-12-08 06:05:02.392020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.378 [2024-12-08 06:05:02.392128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.378 [2024-12-08 06:05:02.392151] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:39.378 [2024-12-08 06:05:02.392163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:39.378 [2024-12-08 06:05:02.392174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.378 [2024-12-08 06:05:02.393490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:39.378 [2024-12-08 06:05:02.394684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.020 ms, result 0 00:17:39.378 [2024-12-08 06:05:02.395861] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.378 Some configs were skipped because the RPC state that can call them passed over. 00:17:39.637 06:05:02 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:39.638 [2024-12-08 06:05:02.654810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.638 [2024-12-08 06:05:02.654867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:39.638 [2024-12-08 06:05:02.654891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms 00:17:39.638 [2024-12-08 06:05:02.654904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.638 [2024-12-08 06:05:02.654956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.703 ms, result 0 00:17:39.638 true 00:17:39.638 06:05:02 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:39.897 [2024-12-08 06:05:02.926928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.897 [2024-12-08 06:05:02.927002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:39.897 [2024-12-08 06:05:02.927022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:17:39.897 [2024-12-08 06:05:02.927063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.897 [2024-12-08 06:05:02.927127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.598 ms, result 0 00:17:39.897 true 00:17:40.173 06:05:02 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86583 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86583 ']' 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86583 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86583 00:17:40.173 killing process with pid 86583 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86583' 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86583 00:17:40.173 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86583 00:17:40.173 [2024-12-08 06:05:03.080667] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.173 [2024-12-08 06:05:03.080734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:40.173 [2024-12-08 06:05:03.080771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:40.173 [2024-12-08 06:05:03.080782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.173 [2024-12-08 06:05:03.080817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:40.173 [2024-12-08 06:05:03.081247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.173 [2024-12-08 06:05:03.081269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:40.173 [2024-12-08 06:05:03.081281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:40.173 [2024-12-08 06:05:03.081304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.173 [2024-12-08 06:05:03.081568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.173 [2024-12-08 06:05:03.081588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:40.173 [2024-12-08 06:05:03.081599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:17:40.173 [2024-12-08 06:05:03.081610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.173 [2024-12-08 06:05:03.085383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.173 [2024-12-08 06:05:03.085446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:40.173 [2024-12-08 06:05:03.085463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:17:40.173 [2024-12-08 06:05:03.085479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.173 [2024-12-08 06:05:03.092043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.173 [2024-12-08 06:05:03.092297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:40.174 [2024-12-08 06:05:03.092324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.489 ms 00:17:40.174 [2024-12-08 06:05:03.092341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.093831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.174 [2024-12-08 06:05:03.093890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:40.174 [2024-12-08 06:05:03.093906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:17:40.174 [2024-12-08 06:05:03.093918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.096970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.174 [2024-12-08 06:05:03.097029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:40.174 [2024-12-08 06:05:03.097051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:17:40.174 [2024-12-08 06:05:03.097064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.097239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.174 [2024-12-08 06:05:03.097263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:40.174 [2024-12-08 06:05:03.097276] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:40.174 [2024-12-08 06:05:03.097289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.099175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.174 [2024-12-08 06:05:03.099272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:40.174 [2024-12-08 06:05:03.099288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms 00:17:40.174 [2024-12-08 06:05:03.099307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.100733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.174 [2024-12-08 06:05:03.100787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:40.174 [2024-12-08 06:05:03.100816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:17:40.174 [2024-12-08 06:05:03.100828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.101965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.174 [2024-12-08 06:05:03.102019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:40.174 [2024-12-08 06:05:03.102033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:17:40.174 [2024-12-08 06:05:03.102044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.103219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.174 [2024-12-08 06:05:03.103486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:40.174 [2024-12-08 06:05:03.103530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:17:40.174 [2024-12-08 06:05:03.103544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.174 [2024-12-08 06:05:03.103595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:40.174 [2024-12-08 06:05:03.103623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103778] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.103992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 
06:05:03.104151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:40.174 [2024-12-08 06:05:03.104164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:40.175 [2024-12-08 06:05:03.104480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:40.175 [2024-12-08 06:05:03.104972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:40.176 [2024-12-08 06:05:03.104983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:40.176 [2024-12-08 06:05:03.105007] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:40.176 [2024-12-08 06:05:03.105018] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c471906e-ccc4-48c5-8290-7cb3cbdef2d5 00:17:40.176 [2024-12-08 06:05:03.105031] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:40.176 [2024-12-08 06:05:03.105040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:40.176 [2024-12-08 06:05:03.105054] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:40.176 [2024-12-08 06:05:03.105064] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:40.176 [2024-12-08 06:05:03.105084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:40.176 [2024-12-08 06:05:03.105095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:40.176 [2024-12-08 06:05:03.105109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:40.176 [2024-12-08 06:05:03.105118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:40.176 [2024-12-08 06:05:03.105130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:40.176 [2024-12-08 06:05:03.105141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.176 
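The ftl_dev_dump_stats block above decodes as follows: the device recorded 960 total (media) writes during this startup/shutdown cycle but 0 user writes, so the write amplification factor prints as inf, WAF being the ratio of media writes to host writes and the denominator here being zero. A minimal sketch of that ratio, assuming the two counters mean exactly what their labels say; the helper below is illustrative only and is not SPDK code:

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: write amplification factor, i.e. media writes
 * divided by host writes. A zero host-write count yields infinity,
 * matching the "WAF: inf" line in the dump above. */
static double waf(uint64_t total_writes, uint64_t user_writes)
{
    if (user_writes == 0)
        return INFINITY;    /* printf("%g") renders this as "inf" */
    return (double)total_writes / (double)user_writes;
}

int main(void)
{
    /* Counter values taken from the ftl_dev_dump_stats output above. */
    printf("WAF: %g\n", waf(960, 0));
    return 0;
}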
[2024-12-08 06:05:03.105156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:40.176 [2024-12-08 06:05:03.105168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:17:40.176 [2024-12-08 06:05:03.105217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.106631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.176 [2024-12-08 06:05:03.106781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:40.176 [2024-12-08 06:05:03.106806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:17:40.176 [2024-12-08 06:05:03.106824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.106929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.176 [2024-12-08 06:05:03.106954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:40.176 [2024-12-08 06:05:03.106968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:40.176 [2024-12-08 06:05:03.106983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.112115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.112176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.176 [2024-12-08 06:05:03.112218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.112236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.112331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.112357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.176 [2024-12-08 06:05:03.112370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.112400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.112454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.112487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.176 [2024-12-08 06:05:03.112499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.112531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.112556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.112586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.176 [2024-12-08 06:05:03.112597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.112609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.120692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.120769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.176 [2024-12-08 06:05:03.120786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.120798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.127055] 
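Every management step in this trace is bracketed by the same four trace_step lines: an Action or Rollback header, the step name, a duration, and a status. The Rollback entries above replay the startup steps in reverse order (reloc, bands metadata, trim map, valid map, NV cache, ...) with 0.000 ms durations, which is the LIFO unwind of a pipeline whose resources were already released on the main shutdown path. A toy synchronous step runner in the same spirit; the types and names here are invented for illustration, since the real mngt/ftl_mngt.c machinery is asynchronous and poller-driven:

#include <stdio.h>
#include <time.h>

struct step {                     /* invented type, not the SPDK API */
    const char *name;
    int (*action)(void);
};

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

/* Run steps in order and report each one the way trace_step does;
 * a nonzero status stops the walk, after which the completed steps
 * would be unwound last-in-first-out, as in the Rollback log. */
static int run(const struct step *steps, int n)
{
    for (int i = 0; i < n; i++) {
        double t0 = now_ms();
        int status = steps[i].action();
        printf("name: %s\nduration: %.3f ms\nstatus: %d\n",
               steps[i].name, now_ms() - t0, status);
        if (status != 0)
            return status;
    }
    return 0;
}

static int noop(void) { return 0; }

int main(void)
{
    const struct step steps[] = { { "Dump statistics", noop } };
    return run(steps, 1);
}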
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.127126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.176 [2024-12-08 06:05:03.127143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.127167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.127271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.127304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.176 [2024-12-08 06:05:03.127334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.127346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.127382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.127416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.176 [2024-12-08 06:05:03.127427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.127453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.127602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.127634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.176 [2024-12-08 06:05:03.127649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.127689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.127747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.127789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:40.176 [2024-12-08 06:05:03.127803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.127833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.127909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.127949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.176 [2024-12-08 06:05:03.127977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.128014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.128071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.176 [2024-12-08 06:05:03.128097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.176 [2024-12-08 06:05:03.128111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.176 [2024-12-08 06:05:03.128128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.176 [2024-12-08 06:05:03.128343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.614 ms, result 0 00:17:40.436 06:05:03 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:40.436 06:05:03 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.436 [2024-12-08 06:05:03.472918] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:40.436 [2024-12-08 06:05:03.473381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86619 ] 00:17:40.695 [2024-12-08 06:05:03.616634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.695 [2024-12-08 06:05:03.656856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.956 [2024-12-08 06:05:03.755496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.956 [2024-12-08 06:05:03.755875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.956 [2024-12-08 06:05:03.918854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.919109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.956 [2024-12-08 06:05:03.919138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.956 [2024-12-08 06:05:03.919162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.921733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.921778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.956 [2024-12-08 06:05:03.921809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:17:40.956 [2024-12-08 06:05:03.921823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.921933] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.956 [2024-12-08 06:05:03.922376] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.956 [2024-12-08 06:05:03.922469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.922514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.956 [2024-12-08 06:05:03.922670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:17:40.956 [2024-12-08 06:05:03.922707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.924061] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.956 [2024-12-08 06:05:03.926347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.926384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.956 [2024-12-08 06:05:03.926419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.288 ms 00:17:40.956 [2024-12-08 06:05:03.926432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.926500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.926517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.956 [2024-12-08 06:05:03.926528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.020 ms 00:17:40.956 [2024-12-08 06:05:03.926538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.931137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.931221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.956 [2024-12-08 06:05:03.931277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:17:40.956 [2024-12-08 06:05:03.931288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.931439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.931495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.956 [2024-12-08 06:05:03.931509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:40.956 [2024-12-08 06:05:03.931530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.931569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.931584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.956 [2024-12-08 06:05:03.931601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.956 [2024-12-08 06:05:03.931621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.931660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.956 [2024-12-08 06:05:03.933000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.933036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.956 [2024-12-08 06:05:03.933065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:17:40.956 [2024-12-08 06:05:03.933075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.933118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.933132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.956 [2024-12-08 06:05:03.933148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:40.956 [2024-12-08 06:05:03.933160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.933182] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.956 [2024-12-08 06:05:03.933234] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.956 [2024-12-08 06:05:03.933299] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.956 [2024-12-08 06:05:03.933322] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.956 [2024-12-08 06:05:03.933439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.956 [2024-12-08 06:05:03.933458] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.956 [2024-12-08 06:05:03.933472] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.956 [2024-12-08 06:05:03.933494] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.956 [2024-12-08 06:05:03.933507] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.956 [2024-12-08 06:05:03.933518] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.956 [2024-12-08 06:05:03.933544] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.956 [2024-12-08 06:05:03.933555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.956 [2024-12-08 06:05:03.933596] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.956 [2024-12-08 06:05:03.933607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.933618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.956 [2024-12-08 06:05:03.933635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:17:40.956 [2024-12-08 06:05:03.933647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.933746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.956 [2024-12-08 06:05:03.933760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.956 [2024-12-08 06:05:03.933771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:40.956 [2024-12-08 06:05:03.933781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.956 [2024-12-08 06:05:03.933889] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.956 [2024-12-08 06:05:03.933915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.956 [2024-12-08 06:05:03.933927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.956 [2024-12-08 06:05:03.933938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.956 [2024-12-08 06:05:03.933953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.956 [2024-12-08 06:05:03.933963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.956 [2024-12-08 06:05:03.933973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.956 [2024-12-08 06:05:03.933983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.956 [2024-12-08 06:05:03.933993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.956 [2024-12-08 06:05:03.934006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.956 [2024-12-08 06:05:03.934016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.956 [2024-12-08 06:05:03.934026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.956 [2024-12-08 06:05:03.934035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.956 [2024-12-08 06:05:03.934045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.956 [2024-12-08 06:05:03.934055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:40.956 [2024-12-08 06:05:03.934064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.956 [2024-12-08 06:05:03.934074] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.956 [2024-12-08 06:05:03.934085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:40.957 [2024-12-08 06:05:03.934095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.957 [2024-12-08 06:05:03.934114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.957 [2024-12-08 06:05:03.934134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.957 [2024-12-08 06:05:03.934143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.957 [2024-12-08 06:05:03.934167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.957 [2024-12-08 06:05:03.934178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.957 [2024-12-08 06:05:03.934196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.957 [2024-12-08 06:05:03.934206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.957 [2024-12-08 06:05:03.934225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.957 [2024-12-08 06:05:03.934234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.957 [2024-12-08 06:05:03.934277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.957 [2024-12-08 06:05:03.934291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:40.957 [2024-12-08 06:05:03.934300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.957 [2024-12-08 06:05:03.934310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.957 [2024-12-08 06:05:03.934320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:40.957 [2024-12-08 06:05:03.934329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.957 [2024-12-08 06:05:03.934354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:40.957 [2024-12-08 06:05:03.934365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934374] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.957 [2024-12-08 06:05:03.934385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.957 [2024-12-08 06:05:03.934395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.957 [2024-12-08 06:05:03.934415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.957 [2024-12-08 06:05:03.934426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.957 
[2024-12-08 06:05:03.934436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.957 [2024-12-08 06:05:03.934446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.957 [2024-12-08 06:05:03.934456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.957 [2024-12-08 06:05:03.934465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.957 [2024-12-08 06:05:03.934475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.957 [2024-12-08 06:05:03.934486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.957 [2024-12-08 06:05:03.934499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.957 [2024-12-08 06:05:03.934511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.957 [2024-12-08 06:05:03.934522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:40.957 [2024-12-08 06:05:03.934535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.957 [2024-12-08 06:05:03.934546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:40.957 [2024-12-08 06:05:03.934557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:40.957 [2024-12-08 06:05:03.934567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:40.957 [2024-12-08 06:05:03.934578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:40.957 [2024-12-08 06:05:03.934599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:40.957 [2024-12-08 06:05:03.934609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:40.957 [2024-12-08 06:05:03.934620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:40.957 [2024-12-08 06:05:03.934630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:40.957 [2024-12-08 06:05:03.934641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:40.957 [2024-12-08 06:05:03.934651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:40.957 [2024-12-08 06:05:03.934662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:40.957 [2024-12-08 06:05:03.934672] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.957 [2024-12-08 06:05:03.934684] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.957 [2024-12-08 06:05:03.934695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.957 [2024-12-08 06:05:03.934706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.957 [2024-12-08 06:05:03.934719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.957 [2024-12-08 06:05:03.934731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.957 [2024-12-08 06:05:03.934743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.934754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.957 [2024-12-08 06:05:03.934768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:17:40.957 [2024-12-08 06:05:03.934779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.953530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.953829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.957 [2024-12-08 06:05:03.953874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.667 ms 00:17:40.957 [2024-12-08 06:05:03.953892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.954131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.954159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.957 [2024-12-08 06:05:03.954176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:40.957 [2024-12-08 06:05:03.954229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.964099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.964139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.957 [2024-12-08 06:05:03.964170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.813 ms 00:17:40.957 [2024-12-08 06:05:03.964180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.964339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.964364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.957 [2024-12-08 06:05:03.964379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.957 [2024-12-08 06:05:03.964399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.964771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.964796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.957 [2024-12-08 06:05:03.964808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:17:40.957 [2024-12-08 06:05:03.964819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 
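The superblock metadata layout dumped just above is internally consistent with the ftl_layout.c region dump earlier in this startup trace, and the agreement is easy to verify: Region type:0x2 (the L2P) is listed with blk_sz:0x5a00, dump_region reports the l2p region as 90.00 MiB, and the setup summary reports 23592960 L2P entries at an address size of 4 bytes. All three match once a 4 KiB FTL block is assumed (an assumption on my part; the block size is not printed in this excerpt):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t blk = 4096;             /* assumed FTL block size */
    uint64_t l2p_blocks = 0x5a00;    /* Region type:0x2 blk_sz */
    uint64_t l2p_entries = 23592960; /* "L2P entries" line */
    uint64_t entry_size = 4;         /* "L2P address size" line */

    /* Both views of the L2P region size come out to the same
     * 90 MiB that dump_region prints for "Region l2p". */
    printf("%llu MiB\n", (unsigned long long)(l2p_blocks * blk >> 20));
    printf("%llu MiB\n",
           (unsigned long long)(l2p_entries * entry_size >> 20));
    return 0;
}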
06:05:03.964971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.964994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.957 [2024-12-08 06:05:03.965028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:40.957 [2024-12-08 06:05:03.965038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.969829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.969866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.957 [2024-12-08 06:05:03.969896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.761 ms 00:17:40.957 [2024-12-08 06:05:03.969906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.972207] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:40.957 [2024-12-08 06:05:03.972463] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.957 [2024-12-08 06:05:03.972489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.972501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.957 [2024-12-08 06:05:03.972514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:17:40.957 [2024-12-08 06:05:03.972525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.988461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-12-08 06:05:03.988522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.957 [2024-12-08 06:05:03.988555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.878 ms 00:17:40.957 [2024-12-08 06:05:03.988567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-12-08 06:05:03.990532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-12-08 06:05:03.990574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.958 [2024-12-08 06:05:03.990590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:17:40.958 [2024-12-08 06:05:03.990601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-12-08 06:05:03.992215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-12-08 06:05:03.992252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:40.958 [2024-12-08 06:05:03.992282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.562 ms 00:17:40.958 [2024-12-08 06:05:03.992301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-12-08 06:05:03.992691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-12-08 06:05:03.992716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.958 [2024-12-08 06:05:03.992730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:40.958 [2024-12-08 06:05:03.992745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.009162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.009270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.217 [2024-12-08 06:05:04.009309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.369 ms 00:17:41.217 [2024-12-08 06:05:04.009335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.016998] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.217 [2024-12-08 06:05:04.030159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.030244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.217 [2024-12-08 06:05:04.030279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.715 ms 00:17:41.217 [2024-12-08 06:05:04.030290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.030477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.030496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.217 [2024-12-08 06:05:04.030509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.217 [2024-12-08 06:05:04.030563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.030638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.030660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.217 [2024-12-08 06:05:04.030699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:41.217 [2024-12-08 06:05:04.030710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.030744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.030757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.217 [2024-12-08 06:05:04.030769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.217 [2024-12-08 06:05:04.030779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.030819] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.217 [2024-12-08 06:05:04.030835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.030852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.217 [2024-12-08 06:05:04.030863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:41.217 [2024-12-08 06:05:04.030883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.034787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.034827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.217 [2024-12-08 06:05:04.034859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.873 ms 00:17:41.217 [2024-12-08 06:05:04.034869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.035024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.217 [2024-12-08 06:05:04.035044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:41.217 [2024-12-08 06:05:04.035059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:41.217 [2024-12-08 06:05:04.035070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.217 [2024-12-08 06:05:04.036239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.217 [2024-12-08 06:05:04.037481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.984 ms, result 0 00:17:41.217 [2024-12-08 06:05:04.038336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.217 [2024-12-08 06:05:04.047826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.151  [2024-12-08T06:05:06.133Z] Copying: 25/256 [MB] (25 MBps) [2024-12-08T06:05:07.068Z] Copying: 46/256 [MB] (21 MBps) [2024-12-08T06:05:08.447Z] Copying: 69/256 [MB] (22 MBps) [2024-12-08T06:05:09.384Z] Copying: 91/256 [MB] (22 MBps) [2024-12-08T06:05:10.322Z] Copying: 113/256 [MB] (22 MBps) [2024-12-08T06:05:11.261Z] Copying: 136/256 [MB] (22 MBps) [2024-12-08T06:05:12.195Z] Copying: 158/256 [MB] (22 MBps) [2024-12-08T06:05:13.143Z] Copying: 180/256 [MB] (22 MBps) [2024-12-08T06:05:14.079Z] Copying: 202/256 [MB] (22 MBps) [2024-12-08T06:05:15.454Z] Copying: 225/256 [MB] (22 MBps) [2024-12-08T06:05:15.454Z] Copying: 247/256 [MB] (22 MBps) [2024-12-08T06:05:15.454Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-08 06:05:15.430408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.409 [2024-12-08 06:05:15.431646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.431799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.409 [2024-12-08 06:05:15.431945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:52.409 [2024-12-08 06:05:15.431975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.432009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:52.409 [2024-12-08 06:05:15.432409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.432436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.409 [2024-12-08 06:05:15.432448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:17:52.409 [2024-12-08 06:05:15.432458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.432723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.432739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.409 [2024-12-08 06:05:15.432750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:52.409 [2024-12-08 06:05:15.432760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.436324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.436358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.409 [2024-12-08 06:05:15.436371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
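The Copying progress above is the actual test payload: spdk_dd reads --count=65536 logical blocks out of ftl0 into the data file, and the counter tops out at 256/256 [MB], implying a 4 KiB logical block (65536 x 4096 B = 256 MiB). At the reported 22 MBps average the transfer should take roughly 11.6 s, which lines up with the wall clock: the copy starts just after the IO channel is created at 06:05:04 and the shutdown sequence begins at 06:05:15. A back-of-the-envelope check; the 4 KiB block size is inferred from the totals in the log rather than stated by the spdk_dd flags shown:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t blocks = 65536;      /* --count from the spdk_dd command */
    uint64_t block_size = 4096;   /* inferred from the 256 MiB total */
    uint64_t total_mib = blocks * block_size / (1024 * 1024);
    double avg_mbps = 22.0;       /* "average 22 MBps" from the log */

    printf("total: %llu MiB\n", (unsigned long long)total_mib);  /* 256 */
    printf("expected: ~%.1f s\n", (double)total_mib / avg_mbps); /* ~11.6 */
    return 0;
}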
duration: 3.518 ms 00:17:52.409 [2024-12-08 06:05:15.436381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.442935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.442963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.409 [2024-12-08 06:05:15.442991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.533 ms 00:17:52.409 [2024-12-08 06:05:15.443000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.444468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.444504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.409 [2024-12-08 06:05:15.444533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:17:52.409 [2024-12-08 06:05:15.444543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.447923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.447981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.409 [2024-12-08 06:05:15.448017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:17:52.409 [2024-12-08 06:05:15.448036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.448151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.448168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.409 [2024-12-08 06:05:15.448224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:52.409 [2024-12-08 06:05:15.448235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.450396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.450433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.409 [2024-12-08 06:05:15.450462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:17:52.409 [2024-12-08 06:05:15.450472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.451960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.409 [2024-12-08 06:05:15.452011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.409 [2024-12-08 06:05:15.452024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:17:52.409 [2024-12-08 06:05:15.452034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-12-08 06:05:15.453070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.668 [2024-12-08 06:05:15.453323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.668 [2024-12-08 06:05:15.453348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:17:52.668 [2024-12-08 06:05:15.453359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.668 [2024-12-08 06:05:15.454580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.668 [2024-12-08 06:05:15.454630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.668 [2024-12-08 
06:05:15.454661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:17:52.668 [2024-12-08 06:05:15.454685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.668 [2024-12-08 06:05:15.454708] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.668 [2024-12-08 06:05:15.454733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.454978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.668 [2024-12-08 06:05:15.455122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455349] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455665] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.455977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 
06:05:15.456003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.456014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.456025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.669 [2024-12-08 06:05:15.456043] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.669 [2024-12-08 06:05:15.456054] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c471906e-ccc4-48c5-8290-7cb3cbdef2d5 00:17:52.669 [2024-12-08 06:05:15.456065] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.669 [2024-12-08 06:05:15.456074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.669 [2024-12-08 06:05:15.456084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.669 [2024-12-08 06:05:15.456094] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.669 [2024-12-08 06:05:15.456103] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.669 [2024-12-08 06:05:15.456114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.669 [2024-12-08 06:05:15.456124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.669 [2024-12-08 06:05:15.456133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.669 [2024-12-08 06:05:15.456142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:52.669 [2024-12-08 06:05:15.456153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.669 [2024-12-08 06:05:15.456163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.669 [2024-12-08 06:05:15.456194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:17:52.669 [2024-12-08 06:05:15.456219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.669 [2024-12-08 06:05:15.457683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.669 [2024-12-08 06:05:15.457739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.669 [2024-12-08 06:05:15.457751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:17:52.669 [2024-12-08 06:05:15.457762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.669 [2024-12-08 06:05:15.457868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.669 [2024-12-08 06:05:15.457886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.669 [2024-12-08 06:05:15.457897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:52.669 [2024-12-08 06:05:15.457914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.669 [2024-12-08 06:05:15.462570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.669 [2024-12-08 06:05:15.462788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.669 [2024-12-08 06:05:15.462910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.669 [2024-12-08 06:05:15.462957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.669 [2024-12-08 06:05:15.463045] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:52.669 [2024-12-08 06:05:15.463144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.669 [2024-12-08 06:05:15.463225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.669 [2024-12-08 06:05:15.463272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.669 [2024-12-08 06:05:15.463378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.669 [2024-12-08 06:05:15.463567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.669 [2024-12-08 06:05:15.463622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.669 [2024-12-08 06:05:15.463659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.669 [2024-12-08 06:05:15.463809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.669 [2024-12-08 06:05:15.463901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.669 [2024-12-08 06:05:15.464047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.464096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.472080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.472373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.670 [2024-12-08 06:05:15.472493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.472541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.478964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.479156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.670 [2024-12-08 06:05:15.479313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.479360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.479562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.479643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.670 [2024-12-08 06:05:15.479803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.479868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.479946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.480034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.670 [2024-12-08 06:05:15.480074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.480117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.480262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.480322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.670 [2024-12-08 06:05:15.480363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.480412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:52.670 [2024-12-08 06:05:15.480563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.480718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.670 [2024-12-08 06:05:15.480826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.480950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.481055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.481134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.670 [2024-12-08 06:05:15.481152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.481173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.481257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.670 [2024-12-08 06:05:15.481290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.670 [2024-12-08 06:05:15.481309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.670 [2024-12-08 06:05:15.481339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-12-08 06:05:15.481496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.822 ms, result 0 00:17:52.670 00:17:52.670 00:17:52.670 06:05:15 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:52.670 06:05:15 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:53.237 06:05:16 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:53.495 [2024-12-08 06:05:16.368565] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:53.495 [2024-12-08 06:05:16.368787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86758 ] 00:17:53.495 [2024-12-08 06:05:16.515747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.781 [2024-12-08 06:05:16.549951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.781 [2024-12-08 06:05:16.631912] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:53.781 [2024-12-08 06:05:16.632005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:53.781 [2024-12-08 06:05:16.789489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.789551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:53.781 [2024-12-08 06:05:16.789586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:53.781 [2024-12-08 06:05:16.789597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.792333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.792544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.781 [2024-12-08 06:05:16.792602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:17:53.781 [2024-12-08 06:05:16.792621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.792819] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:53.781 [2024-12-08 06:05:16.793107] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:53.781 [2024-12-08 06:05:16.793147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.793160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.781 [2024-12-08 06:05:16.793190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:17:53.781 [2024-12-08 06:05:16.793215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.794555] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:53.781 [2024-12-08 06:05:16.796683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.796722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:53.781 [2024-12-08 06:05:16.796766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:17:53.781 [2024-12-08 06:05:16.796780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.796855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.796874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:53.781 [2024-12-08 06:05:16.796885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:53.781 [2024-12-08 06:05:16.796896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.801263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:53.781 [2024-12-08 06:05:16.801302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.781 [2024-12-08 06:05:16.801332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:17:53.781 [2024-12-08 06:05:16.801353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.801491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.801514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.781 [2024-12-08 06:05:16.801535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:53.781 [2024-12-08 06:05:16.801546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.801583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.801596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:53.781 [2024-12-08 06:05:16.801613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:53.781 [2024-12-08 06:05:16.801623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.801651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:53.781 [2024-12-08 06:05:16.802940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.802977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.781 [2024-12-08 06:05:16.803002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:17:53.781 [2024-12-08 06:05:16.803013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.803058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.781 [2024-12-08 06:05:16.803073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:53.781 [2024-12-08 06:05:16.803091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:53.781 [2024-12-08 06:05:16.803104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.781 [2024-12-08 06:05:16.803137] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:53.781 [2024-12-08 06:05:16.803178] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:53.781 [2024-12-08 06:05:16.803264] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:53.781 [2024-12-08 06:05:16.803290] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:53.781 [2024-12-08 06:05:16.803439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:53.781 [2024-12-08 06:05:16.803465] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:53.782 [2024-12-08 06:05:16.803506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:53.782 [2024-12-08 06:05:16.803522] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:53.782 [2024-12-08 06:05:16.803537] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:53.782 [2024-12-08 06:05:16.803549] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:53.782 [2024-12-08 06:05:16.803561] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:53.782 [2024-12-08 06:05:16.803583] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:53.782 [2024-12-08 06:05:16.803594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:53.782 [2024-12-08 06:05:16.803607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.782 [2024-12-08 06:05:16.803619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:53.782 [2024-12-08 06:05:16.803638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:17:53.782 [2024-12-08 06:05:16.803650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.782 [2024-12-08 06:05:16.803763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.782 [2024-12-08 06:05:16.803778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:53.782 [2024-12-08 06:05:16.803820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:53.782 [2024-12-08 06:05:16.803843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.782 [2024-12-08 06:05:16.803970] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:53.782 [2024-12-08 06:05:16.803984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:53.782 [2024-12-08 06:05:16.803995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.782 [2024-12-08 06:05:16.804005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:53.782 [2024-12-08 06:05:16.804029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:53.782 [2024-12-08 06:05:16.804048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:53.782 [2024-12-08 06:05:16.804058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.782 [2024-12-08 06:05:16.804079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:53.782 [2024-12-08 06:05:16.804088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:53.782 [2024-12-08 06:05:16.804096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.782 [2024-12-08 06:05:16.804106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:53.782 [2024-12-08 06:05:16.804115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:53.782 [2024-12-08 06:05:16.804123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:53.782 [2024-12-08 06:05:16.804142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:53.782 [2024-12-08 06:05:16.804151] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:53.782 [2024-12-08 06:05:16.804169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.782 [2024-12-08 06:05:16.804203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:53.782 [2024-12-08 06:05:16.804212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.782 [2024-12-08 06:05:16.804235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:53.782 [2024-12-08 06:05:16.804246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.782 [2024-12-08 06:05:16.804264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:53.782 [2024-12-08 06:05:16.804547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.782 [2024-12-08 06:05:16.804639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:53.782 [2024-12-08 06:05:16.804674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:53.782 [2024-12-08 06:05:16.804795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.782 [2024-12-08 06:05:16.804841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:53.782 [2024-12-08 06:05:16.804877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:53.782 [2024-12-08 06:05:16.804911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.782 [2024-12-08 06:05:16.805047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:53.782 [2024-12-08 06:05:16.805085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:53.782 [2024-12-08 06:05:16.805136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.782 [2024-12-08 06:05:16.805291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:53.782 [2024-12-08 06:05:16.805431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:53.782 [2024-12-08 06:05:16.805557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.782 [2024-12-08 06:05:16.805682] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:53.782 [2024-12-08 06:05:16.805746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:53.782 [2024-12-08 06:05:16.805858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.782 [2024-12-08 06:05:16.805960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.782 [2024-12-08 06:05:16.806020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:53.782 [2024-12-08 06:05:16.806110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:53.782 [2024-12-08 06:05:16.806155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:53.782 
[2024-12-08 06:05:16.806254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:53.782 [2024-12-08 06:05:16.806301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:53.782 [2024-12-08 06:05:16.806455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:53.782 [2024-12-08 06:05:16.806478] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:53.782 [2024-12-08 06:05:16.806494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.782 [2024-12-08 06:05:16.806506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:53.782 [2024-12-08 06:05:16.806517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:53.782 [2024-12-08 06:05:16.806533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:53.782 [2024-12-08 06:05:16.806545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:53.782 [2024-12-08 06:05:16.806556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:53.782 [2024-12-08 06:05:16.806567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:53.782 [2024-12-08 06:05:16.806578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:53.782 [2024-12-08 06:05:16.806602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:53.782 [2024-12-08 06:05:16.806613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:53.782 [2024-12-08 06:05:16.806624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:53.782 [2024-12-08 06:05:16.806635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:53.782 [2024-12-08 06:05:16.806645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:53.782 [2024-12-08 06:05:16.806656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:53.782 [2024-12-08 06:05:16.806667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:53.783 [2024-12-08 06:05:16.806677] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:53.783 [2024-12-08 06:05:16.806689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.783 [2024-12-08 06:05:16.806701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:53.783 [2024-12-08 06:05:16.806711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:53.783 [2024-12-08 06:05:16.806740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:53.783 [2024-12-08 06:05:16.806752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:53.783 [2024-12-08 06:05:16.806764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.783 [2024-12-08 06:05:16.806775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:53.783 [2024-12-08 06:05:16.806790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:17:53.783 [2024-12-08 06:05:16.806801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.826171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.826234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.041 [2024-12-08 06:05:16.826269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.239 ms 00:17:54.041 [2024-12-08 06:05:16.826280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.826452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.826471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.041 [2024-12-08 06:05:16.826497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:54.041 [2024-12-08 06:05:16.826512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.834420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.834464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.041 [2024-12-08 06:05:16.834496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.878 ms 00:17:54.041 [2024-12-08 06:05:16.834506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.834573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.834609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.041 [2024-12-08 06:05:16.834625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.041 [2024-12-08 06:05:16.834635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.834964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.834981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.041 [2024-12-08 06:05:16.834992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:54.041 [2024-12-08 06:05:16.835003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.835150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.835166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.041 [2024-12-08 06:05:16.835178] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:54.041 [2024-12-08 06:05:16.835188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.840088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.840124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.041 [2024-12-08 06:05:16.840155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.870 ms 00:17:54.041 [2024-12-08 06:05:16.840165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.842477] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:54.041 [2024-12-08 06:05:16.842521] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.041 [2024-12-08 06:05:16.842554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.842565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.041 [2024-12-08 06:05:16.842576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:17:54.041 [2024-12-08 06:05:16.842586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.857067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.857105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.041 [2024-12-08 06:05:16.857137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.431 ms 00:17:54.041 [2024-12-08 06:05:16.857160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.041 [2024-12-08 06:05:16.859090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.041 [2024-12-08 06:05:16.859126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.042 [2024-12-08 06:05:16.859156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:17:54.042 [2024-12-08 06:05:16.859165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.860921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.860958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.042 [2024-12-08 06:05:16.860989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:17:54.042 [2024-12-08 06:05:16.861006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.861363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.861383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.042 [2024-12-08 06:05:16.861395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:54.042 [2024-12-08 06:05:16.861410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.876778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.876848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.042 [2024-12-08 06:05:16.876883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.323 ms 00:17:54.042 [2024-12-08 06:05:16.876894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.884593] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.042 [2024-12-08 06:05:16.897081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.897135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.042 [2024-12-08 06:05:16.897168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.082 ms 00:17:54.042 [2024-12-08 06:05:16.897178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.897369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.897399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.042 [2024-12-08 06:05:16.897420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.042 [2024-12-08 06:05:16.897441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.897509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.897534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.042 [2024-12-08 06:05:16.897546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:54.042 [2024-12-08 06:05:16.897556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.897667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.897690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.042 [2024-12-08 06:05:16.897703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:54.042 [2024-12-08 06:05:16.897727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.897771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.042 [2024-12-08 06:05:16.897788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.897804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.042 [2024-12-08 06:05:16.897824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:54.042 [2024-12-08 06:05:16.897837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.901545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.901584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.042 [2024-12-08 06:05:16.901629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:17:54.042 [2024-12-08 06:05:16.901639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 [2024-12-08 06:05:16.901741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.042 [2024-12-08 06:05:16.901759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.042 [2024-12-08 06:05:16.901783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:54.042 [2024-12-08 06:05:16.901794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.042 
[2024-12-08 06:05:16.902854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.042 [2024-12-08 06:05:16.904228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.958 ms, result 0 00:17:54.042 [2024-12-08 06:05:16.905306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.042 [2024-12-08 06:05:16.914610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.303  [2024-12-08T06:05:17.348Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-12-08 06:05:17.091145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.303 [2024-12-08 06:05:17.092309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.092384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:54.303 [2024-12-08 06:05:17.092404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.303 [2024-12-08 06:05:17.092422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.092454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:54.303 [2024-12-08 06:05:17.092895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.092917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:54.303 [2024-12-08 06:05:17.092930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:17:54.303 [2024-12-08 06:05:17.092941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.094551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.094762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:54.303 [2024-12-08 06:05:17.094788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:17:54.303 [2024-12-08 06:05:17.094801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.099024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.099078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:54.303 [2024-12-08 06:05:17.099096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:17:54.303 [2024-12-08 06:05:17.099108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.106667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.106716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:54.303 [2024-12-08 06:05:17.106730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.504 ms 00:17:54.303 [2024-12-08 06:05:17.106741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.108057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.108111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:54.303 [2024-12-08 06:05:17.108126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.272 ms 00:17:54.303 [2024-12-08 06:05:17.108136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.111234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.111286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:54.303 [2024-12-08 06:05:17.111323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:17:54.303 [2024-12-08 06:05:17.111352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.111505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.111525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:54.303 [2024-12-08 06:05:17.111550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:54.303 [2024-12-08 06:05:17.111562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.113359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.113395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:54.303 [2024-12-08 06:05:17.113409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:17:54.303 [2024-12-08 06:05:17.113419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.114785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.114836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:54.303 [2024-12-08 06:05:17.114850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:17:54.303 [2024-12-08 06:05:17.114860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.115936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.115988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:54.303 [2024-12-08 06:05:17.116002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:17:54.303 [2024-12-08 06:05:17.116012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.117055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.303 [2024-12-08 06:05:17.117260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:54.303 [2024-12-08 06:05:17.117286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:17:54.303 [2024-12-08 06:05:17.117298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.303 [2024-12-08 06:05:17.117344] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:54.303 [2024-12-08 06:05:17.117379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 
06:05:17.117430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:54.303 [2024-12-08 06:05:17.117742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:54.303 [2024-12-08 06:05:17.117845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.117995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:54.304 [2024-12-08 06:05:17.118623] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:54.304 [2024-12-08 06:05:17.118634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c471906e-ccc4-48c5-8290-7cb3cbdef2d5 00:17:54.304 [2024-12-08 06:05:17.118646] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:54.304 [2024-12-08 06:05:17.118657] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:54.304 
[2024-12-08 06:05:17.118668] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:54.304 [2024-12-08 06:05:17.118679] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:54.304 [2024-12-08 06:05:17.118689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:54.304 [2024-12-08 06:05:17.118700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:54.304 [2024-12-08 06:05:17.118723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:54.304 [2024-12-08 06:05:17.118734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:54.304 [2024-12-08 06:05:17.118744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:54.304 [2024-12-08 06:05:17.118755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.304 [2024-12-08 06:05:17.118766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:54.304 [2024-12-08 06:05:17.118783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:17:54.304 [2024-12-08 06:05:17.118795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.304 [2024-12-08 06:05:17.120213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.304 [2024-12-08 06:05:17.120251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:54.304 [2024-12-08 06:05:17.120265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:17:54.304 [2024-12-08 06:05:17.120275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.304 [2024-12-08 06:05:17.120369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.304 [2024-12-08 06:05:17.120390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:54.304 [2024-12-08 06:05:17.120410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:54.304 [2024-12-08 06:05:17.120420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.304 [2024-12-08 06:05:17.125070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.304 [2024-12-08 06:05:17.125265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.304 [2024-12-08 06:05:17.125389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.304 [2024-12-08 06:05:17.125504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.304 [2024-12-08 06:05:17.125617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.304 [2024-12-08 06:05:17.125707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.304 [2024-12-08 06:05:17.125808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.304 [2024-12-08 06:05:17.125855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.304 [2024-12-08 06:05:17.126095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.304 [2024-12-08 06:05:17.126265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.305 [2024-12-08 06:05:17.126372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.126420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.126455] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.126471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.305 [2024-12-08 06:05:17.126491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.126502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.134667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.134728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.305 [2024-12-08 06:05:17.134744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.134754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.141107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.141163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.305 [2024-12-08 06:05:17.141194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.141224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.141263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.141276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.305 [2024-12-08 06:05:17.141287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.141298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.141346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.141359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.305 [2024-12-08 06:05:17.141370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.141386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.141477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.141496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.305 [2024-12-08 06:05:17.141508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.141518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.141562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.141594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:54.305 [2024-12-08 06:05:17.141648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.141675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.141744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.141768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.305 [2024-12-08 06:05:17.141780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.141791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:54.305 [2024-12-08 06:05:17.141844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.305 [2024-12-08 06:05:17.141860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.305 [2024-12-08 06:05:17.141883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.305 [2024-12-08 06:05:17.141900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.305 [2024-12-08 06:05:17.142082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.725 ms, result 0 00:17:54.305 00:17:54.305 00:17:54.563 06:05:17 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86772 00:17:54.563 06:05:17 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:54.563 06:05:17 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86772 00:17:54.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.563 06:05:17 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86772 ']' 00:17:54.563 06:05:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.563 06:05:17 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.563 06:05:17 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.563 06:05:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.563 06:05:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:54.563 [2024-12-08 06:05:17.513388] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:54.563 [2024-12-08 06:05:17.513803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86772 ] 00:17:54.822 [2024-12-08 06:05:17.662540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.822 [2024-12-08 06:05:17.696174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.387 06:05:18 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:55.387 06:05:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:55.387 06:05:18 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:55.644 [2024-12-08 06:05:18.662867] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.644 [2024-12-08 06:05:18.662939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.904 [2024-12-08 06:05:18.835634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.835697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.904 [2024-12-08 06:05:18.835716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:55.904 [2024-12-08 06:05:18.835730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.838501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.838548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.904 [2024-12-08 06:05:18.838567] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:17:55.904 [2024-12-08 06:05:18.838580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.838734] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.904 [2024-12-08 06:05:18.838999] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.904 [2024-12-08 06:05:18.839035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.839049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.904 [2024-12-08 06:05:18.839069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:17:55.904 [2024-12-08 06:05:18.839081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.840536] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:55.904 [2024-12-08 06:05:18.842833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.842873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:55.904 [2024-12-08 06:05:18.842891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:17:55.904 [2024-12-08 06:05:18.842909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.842979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.842997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:55.904 [2024-12-08 06:05:18.843021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:55.904 [2024-12-08 06:05:18.843031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.847147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.847192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.904 [2024-12-08 06:05:18.847226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.062 ms 00:17:55.904 [2024-12-08 06:05:18.847236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.847404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.847434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.904 [2024-12-08 06:05:18.847448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:55.904 [2024-12-08 06:05:18.847465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.847529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.847547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.904 [2024-12-08 06:05:18.847577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:55.904 [2024-12-08 06:05:18.847622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.847675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:55.904 [2024-12-08 06:05:18.849033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:55.904 [2024-12-08 06:05:18.849073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.904 [2024-12-08 06:05:18.849087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:17:55.904 [2024-12-08 06:05:18.849098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.849139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.849156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.904 [2024-12-08 06:05:18.849167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:55.904 [2024-12-08 06:05:18.849178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.849255] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:55.904 [2024-12-08 06:05:18.849282] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:55.904 [2024-12-08 06:05:18.849320] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:55.904 [2024-12-08 06:05:18.849357] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:55.904 [2024-12-08 06:05:18.849461] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.904 [2024-12-08 06:05:18.849478] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.904 [2024-12-08 06:05:18.849508] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.904 [2024-12-08 06:05:18.849540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.904 [2024-12-08 06:05:18.849578] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.904 [2024-12-08 06:05:18.849595] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:55.904 [2024-12-08 06:05:18.849606] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.904 [2024-12-08 06:05:18.849618] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.904 [2024-12-08 06:05:18.849628] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.904 [2024-12-08 06:05:18.849641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.849654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.904 [2024-12-08 06:05:18.849667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:17:55.904 [2024-12-08 06:05:18.849677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.849774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.904 [2024-12-08 06:05:18.849788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.904 [2024-12-08 06:05:18.849801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:55.904 [2024-12-08 06:05:18.849812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.904 [2024-12-08 06:05:18.849919] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.904 [2024-12-08 06:05:18.849934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.904 [2024-12-08 06:05:18.849951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.905 [2024-12-08 06:05:18.849962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.905 [2024-12-08 06:05:18.849977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.905 [2024-12-08 06:05:18.849987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.905 [2024-12-08 06:05:18.849998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.905 [2024-12-08 06:05:18.850022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.905 [2024-12-08 06:05:18.850044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.905 [2024-12-08 06:05:18.850054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:55.905 [2024-12-08 06:05:18.850065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.905 [2024-12-08 06:05:18.850075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.905 [2024-12-08 06:05:18.850087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:55.905 [2024-12-08 06:05:18.850097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.905 [2024-12-08 06:05:18.850120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.905 [2024-12-08 06:05:18.850155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.905 [2024-12-08 06:05:18.850186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.905 [2024-12-08 06:05:18.850225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.905 [2024-12-08 06:05:18.850290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.905 [2024-12-08 
06:05:18.850340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.905 [2024-12-08 06:05:18.850361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.905 [2024-12-08 06:05:18.850372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:55.905 [2024-12-08 06:05:18.850387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.905 [2024-12-08 06:05:18.850398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.905 [2024-12-08 06:05:18.850410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:55.905 [2024-12-08 06:05:18.850419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.905 [2024-12-08 06:05:18.850442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:55.905 [2024-12-08 06:05:18.850454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850464] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.905 [2024-12-08 06:05:18.850476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.905 [2024-12-08 06:05:18.850496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.905 [2024-12-08 06:05:18.850528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.905 [2024-12-08 06:05:18.850542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.905 [2024-12-08 06:05:18.850552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.905 [2024-12-08 06:05:18.850564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.905 [2024-12-08 06:05:18.850574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.905 [2024-12-08 06:05:18.850590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.905 [2024-12-08 06:05:18.850602] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.905 [2024-12-08 06:05:18.850625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.905 [2024-12-08 06:05:18.850637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:55.905 [2024-12-08 06:05:18.850652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:55.905 [2024-12-08 06:05:18.850663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:55.905 [2024-12-08 06:05:18.850676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:55.905 [2024-12-08 06:05:18.850687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:55.905 
[2024-12-08 06:05:18.850699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:55.905 [2024-12-08 06:05:18.850725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:55.905 [2024-12-08 06:05:18.850737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:55.905 [2024-12-08 06:05:18.850747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:55.905 [2024-12-08 06:05:18.850760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:55.905 [2024-12-08 06:05:18.850770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:55.905 [2024-12-08 06:05:18.850783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:55.905 [2024-12-08 06:05:18.850793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:55.905 [2024-12-08 06:05:18.850808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:55.905 [2024-12-08 06:05:18.850828] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.905 [2024-12-08 06:05:18.850843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.905 [2024-12-08 06:05:18.850855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.905 [2024-12-08 06:05:18.850868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.905 [2024-12-08 06:05:18.850878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.905 [2024-12-08 06:05:18.850891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.905 [2024-12-08 06:05:18.850902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.905 [2024-12-08 06:05:18.850918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.905 [2024-12-08 06:05:18.850936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:17:55.905 [2024-12-08 06:05:18.850950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.905 [2024-12-08 06:05:18.858970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.905 [2024-12-08 06:05:18.859022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.905 [2024-12-08 06:05:18.859039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.935 ms 00:17:55.905 [2024-12-08 06:05:18.859051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.905 [2024-12-08 06:05:18.859228] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.905 [2024-12-08 06:05:18.859254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:55.905 [2024-12-08 06:05:18.859269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:55.905 [2024-12-08 06:05:18.859282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.905 [2024-12-08 06:05:18.867150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.905 [2024-12-08 06:05:18.867219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.905 [2024-12-08 06:05:18.867236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.841 ms 00:17:55.905 [2024-12-08 06:05:18.867248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.905 [2024-12-08 06:05:18.867366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.905 [2024-12-08 06:05:18.867392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.905 [2024-12-08 06:05:18.867406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.905 [2024-12-08 06:05:18.867419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.905 [2024-12-08 06:05:18.867810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.905 [2024-12-08 06:05:18.867898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.905 [2024-12-08 06:05:18.867917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:55.905 [2024-12-08 06:05:18.867930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.905 [2024-12-08 06:05:18.868079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.905 [2024-12-08 06:05:18.868116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.905 [2024-12-08 06:05:18.868130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:55.906 [2024-12-08 06:05:18.868142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.884969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.885032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.906 [2024-12-08 06:05:18.885057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.798 ms 00:17:55.906 [2024-12-08 06:05:18.885075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.888227] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:55.906 [2024-12-08 06:05:18.888326] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:55.906 [2024-12-08 06:05:18.888367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.888397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:55.906 [2024-12-08 06:05:18.888415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:17:55.906 [2024-12-08 06:05:18.888433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.903270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 
06:05:18.903467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:55.906 [2024-12-08 06:05:18.903538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.764 ms 00:17:55.906 [2024-12-08 06:05:18.903557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.905619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.905661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:55.906 [2024-12-08 06:05:18.905675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.975 ms 00:17:55.906 [2024-12-08 06:05:18.905686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.907329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.907369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:55.906 [2024-12-08 06:05:18.907383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.598 ms 00:17:55.906 [2024-12-08 06:05:18.907395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.907828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.907868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:55.906 [2024-12-08 06:05:18.907886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:17:55.906 [2024-12-08 06:05:18.907900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.923327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.923437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:55.906 [2024-12-08 06:05:18.923459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.356 ms 00:17:55.906 [2024-12-08 06:05:18.923486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.931731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:55.906 [2024-12-08 06:05:18.944852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.944919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:55.906 [2024-12-08 06:05:18.944958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.244 ms 00:17:55.906 [2024-12-08 06:05:18.944970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.945139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.945171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:55.906 [2024-12-08 06:05:18.945187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:55.906 [2024-12-08 06:05:18.945250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.945358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.945392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:55.906 [2024-12-08 06:05:18.945426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:55.906 [2024-12-08 
06:05:18.945449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.945505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.945523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:55.906 [2024-12-08 06:05:18.945541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:55.906 [2024-12-08 06:05:18.945552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.906 [2024-12-08 06:05:18.945597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:55.906 [2024-12-08 06:05:18.945617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.906 [2024-12-08 06:05:18.945630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:55.906 [2024-12-08 06:05:18.945643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:55.906 [2024-12-08 06:05:18.945656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.165 [2024-12-08 06:05:18.949573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.165 [2024-12-08 06:05:18.949626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.165 [2024-12-08 06:05:18.949659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.887 ms 00:17:56.165 [2024-12-08 06:05:18.949674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.165 [2024-12-08 06:05:18.949828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.165 [2024-12-08 06:05:18.949860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.165 [2024-12-08 06:05:18.949874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:56.165 [2024-12-08 06:05:18.949886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.165 [2024-12-08 06:05:18.951026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.165 [2024-12-08 06:05:18.952346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.058 ms, result 0 00:17:56.165 [2024-12-08 06:05:18.953410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.165 Some configs were skipped because the RPC state that can call them passed over. 
00:17:56.165 06:05:18 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:56.424 [2024-12-08 06:05:19.264377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.424 [2024-12-08 06:05:19.264648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:56.424 [2024-12-08 06:05:19.264793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:17:56.424 [2024-12-08 06:05:19.264911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.424 [2024-12-08 06:05:19.265018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.164 ms, result 0 00:17:56.424 true 00:17:56.424 06:05:19 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:56.686 [2024-12-08 06:05:19.516521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.516800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:56.686 [2024-12-08 06:05:19.516930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:17:56.686 [2024-12-08 06:05:19.516959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.517028] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.878 ms, result 0 00:17:56.686 true 00:17:56.686 06:05:19 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86772 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86772 ']' 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86772 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86772 00:17:56.686 killing process with pid 86772 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86772' 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86772 00:17:56.686 06:05:19 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86772 00:17:56.686 [2024-12-08 06:05:19.658617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.658683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:56.686 [2024-12-08 06:05:19.658721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.686 [2024-12-08 06:05:19.658732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.658768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:56.686 [2024-12-08 06:05:19.659208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.659265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:56.686 [2024-12-08 06:05:19.659278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.420 ms 00:17:56.686 [2024-12-08 06:05:19.659301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.659653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.659702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:56.686 [2024-12-08 06:05:19.659718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:17:56.686 [2024-12-08 06:05:19.659732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.664151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.664238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:56.686 [2024-12-08 06:05:19.664273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.392 ms 00:17:56.686 [2024-12-08 06:05:19.664287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.672806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.673027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:56.686 [2024-12-08 06:05:19.673054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.441 ms 00:17:56.686 [2024-12-08 06:05:19.673072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.674718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.674777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:56.686 [2024-12-08 06:05:19.674792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:17:56.686 [2024-12-08 06:05:19.674804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.677828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.686 [2024-12-08 06:05:19.677872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.686 [2024-12-08 06:05:19.677887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:17:56.686 [2024-12-08 06:05:19.677899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.686 [2024-12-08 06:05:19.678005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.687 [2024-12-08 06:05:19.678026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.687 [2024-12-08 06:05:19.678037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:56.687 [2024-12-08 06:05:19.678064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.687 [2024-12-08 06:05:19.680185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.687 [2024-12-08 06:05:19.680300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:56.687 [2024-12-08 06:05:19.680331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:17:56.687 [2024-12-08 06:05:19.680363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.687 [2024-12-08 06:05:19.681992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.687 [2024-12-08 06:05:19.682035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:56.687 [2024-12-08 
06:05:19.682049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:17:56.687 [2024-12-08 06:05:19.682060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.687 [2024-12-08 06:05:19.683514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.687 [2024-12-08 06:05:19.683557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.687 [2024-12-08 06:05:19.683573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:17:56.687 [2024-12-08 06:05:19.683586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.687 [2024-12-08 06:05:19.684717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.687 [2024-12-08 06:05:19.684771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.687 [2024-12-08 06:05:19.684786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:17:56.687 [2024-12-08 06:05:19.684798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.687 [2024-12-08 06:05:19.684837] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.687 [2024-12-08 06:05:19.684862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.684989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685064] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 
06:05:19.685458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:56.687 [2024-12-08 06:05:19.685805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:56.687 [2024-12-08 06:05:19.685876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.685995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:56.688 [2024-12-08 06:05:19.686089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:56.688 [2024-12-08 06:05:19.686220] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:56.688 [2024-12-08 06:05:19.686231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c471906e-ccc4-48c5-8290-7cb3cbdef2d5
00:17:56.688 [2024-12-08 06:05:19.686583] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:56.688 [2024-12-08 06:05:19.686637] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:56.688 [2024-12-08 06:05:19.686789] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:56.688 [2024-12-08 06:05:19.686841] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:56.688 [2024-12-08 06:05:19.686926] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:56.688 [2024-12-08 06:05:19.687018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:56.688 [2024-12-08 06:05:19.687127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:56.688 [2024-12-08 06:05:19.687172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:56.688 [2024-12-08 06:05:19.687263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:56.688 [2024-12-08 06:05:19.687356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:56.688 [2024-12-08 06:05:19.687525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:56.688 [2024-12-08 06:05:19.687650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms
00:17:56.688 [2024-12-08 06:05:19.687709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.689196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:56.688 [2024-12-08 06:05:19.689370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:56.688 [2024-12-08 06:05:19.689488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms
00:17:56.688 [2024-12-08 06:05:19.689541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
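The two dump formats just above, ftl_dev_dump_bands and ftl_dev_dump_stats, are regular enough to post-process when triaging a run. A minimal parsing sketch in Python, assuming exactly the line shapes captured in this log (summarize_dump and the regexes are illustrative helpers, not part of SPDK):

```python
import re

BAND_RE = re.compile(r"Band\s+(\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")
STAT_RE = re.compile(r"ftl_dev_dump_stats: \*NOTICE\*: \[FTL\]\[\w+\]\s+(.+?): (.+)$")

def summarize_dump(lines):
    """Tally band states and valid blocks, and collect the stats key/value
    pairs (device UUID, total writes, WAF, ...) from a captured log."""
    states, valid, stats = {}, 0, {}
    for line in lines:
        m = BAND_RE.search(line)
        if m:
            valid += int(m.group(2))
            states[m.group(5)] = states.get(m.group(5), 0) + 1
            continue
        m = STAT_RE.search(line)
        if m:
            stats[m.group(1)] = m.group(2)
    return states, valid, stats
```

For the shutdown dump above this should report 100 bands in state 'free' and 0 valid blocks, consistent with the 'total valid LBAs: 0' line in the stats block.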
00:17:56.688 [2024-12-08 06:05:19.689704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:56.688 [2024-12-08 06:05:19.689789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:56.688 [2024-12-08 06:05:19.689884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:17:56.688 [2024-12-08 06:05:19.689934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.695332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.695380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:56.688 [2024-12-08 06:05:19.695396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.695409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.695534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.695559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:56.688 [2024-12-08 06:05:19.695572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.695588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.695655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.695686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:56.688 [2024-12-08 06:05:19.695702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.695716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.695742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.695758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:56.688 [2024-12-08 06:05:19.695770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.695783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.704030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.704096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:56.688 [2024-12-08 06:05:19.704112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.704125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.710798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.710850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:56.688 [2024-12-08 06:05:19.710867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.710882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.710970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.711002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:56.688 [2024-12-08 06:05:19.711021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.711037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.711071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.711086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:56.688 [2024-12-08 06:05:19.711097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.711108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.711219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.711243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:56.688 [2024-12-08 06:05:19.711255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.711267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.711342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.711363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:56.688 [2024-12-08 06:05:19.711374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.711388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.711433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.711450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:56.688 [2024-12-08 06:05:19.711460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.711510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.711577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:56.688 [2024-12-08 06:05:19.711597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:56.688 [2024-12-08 06:05:19.711609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:56.688 [2024-12-08 06:05:19.711622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:56.688 [2024-12-08 06:05:19.711788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.135 ms, result 0
00:17:56.961 06:05:19 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:57.220 [2024-12-08 06:05:20.020516] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
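Each management step in the sequences above is traced as a four-line group: Action or Rollback, then name, duration, and status. A hedged sketch that regroups those notices per step, assuming exactly that ordering (parse_trace_steps is an illustrative helper, not an SPDK API):

```python
import re

def parse_trace_steps(lines):
    """Collect {'kind', 'name', 'duration_ms', 'status'} records from the
    trace_step notices in a captured log, one record per completed group."""
    steps, cur = [], {}
    for line in lines:
        m = re.search(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] (.+)$", line)
        if not m:
            continue
        body = m.group(1).strip()
        if body in ("Action", "Rollback"):
            cur = {"kind": body}          # start of a new step group
        elif body.startswith("name: "):
            cur["name"] = body[len("name: "):]
        elif body.startswith("duration: "):
            cur["duration_ms"] = float(body.split()[1])
        elif body.startswith("status: "):
            cur["status"] = int(body.split()[1])
            steps.append(cur)             # status line closes the group
    return steps
```

Note that summing duration_ms over the shutdown groups gives only a lower bound on the 53.135 ms that finish_msg reports for 'FTL shutdown', since the process total also covers time spent between traced steps.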
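The spdk_dd invocation above is the actual test step: it reads --count blocks from the ftl0 input bdev (--ib) into a flat output file (--of), with the bdev stack defined by the --json config. Assuming a 4 KiB logical block, which is what makes 65536 blocks match the 256 MiB total in the copy progress further down, the command reconstructs as this sketch (paths belong to this CI workspace):

```python
import subprocess

# Same flags as the logged command line; nothing here is invented beyond
# the 4 KiB block-size assumption noted above.
cmd = [
    "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd",
    "--ib=ftl0",                                          # input bdev
    "--of=/home/vagrant/spdk_repo/spdk/test/ftl/data",    # output file
    "--count=65536",                                      # 65536 * 4 KiB = 256 MiB
    "--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json",  # bdev config
]
subprocess.run(cmd, check=True)
```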
00:17:57.220 [2024-12-08 06:05:20.020693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86808 ] 00:17:57.220 [2024-12-08 06:05:20.162259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.220 [2024-12-08 06:05:20.196082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.481 [2024-12-08 06:05:20.278809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.481 [2024-12-08 06:05:20.279151] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.481 [2024-12-08 06:05:20.434791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.434857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:57.481 [2024-12-08 06:05:20.434894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:57.481 [2024-12-08 06:05:20.434905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.437708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.437763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.481 [2024-12-08 06:05:20.437796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:17:57.481 [2024-12-08 06:05:20.437813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.437963] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:57.481 [2024-12-08 06:05:20.438361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:57.481 [2024-12-08 06:05:20.438396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.438410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.481 [2024-12-08 06:05:20.438429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:17:57.481 [2024-12-08 06:05:20.438451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.439857] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:57.481 [2024-12-08 06:05:20.442150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.442227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:57.481 [2024-12-08 06:05:20.442252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:17:57.481 [2024-12-08 06:05:20.442264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.442388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.442416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:57.481 [2024-12-08 06:05:20.442430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:57.481 [2024-12-08 06:05:20.442442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.446908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:57.481 [2024-12-08 06:05:20.446955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.481 [2024-12-08 06:05:20.446987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.392 ms 00:17:57.481 [2024-12-08 06:05:20.446997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.447176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.447201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.481 [2024-12-08 06:05:20.447214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:57.481 [2024-12-08 06:05:20.447261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.447302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.447356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:57.481 [2024-12-08 06:05:20.447376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:57.481 [2024-12-08 06:05:20.447387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.447422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:57.481 [2024-12-08 06:05:20.448885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.449069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.481 [2024-12-08 06:05:20.449115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:17:57.481 [2024-12-08 06:05:20.449128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.449195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.481 [2024-12-08 06:05:20.449235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:57.481 [2024-12-08 06:05:20.449261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:57.481 [2024-12-08 06:05:20.449275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.481 [2024-12-08 06:05:20.449308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:57.481 [2024-12-08 06:05:20.449336] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:57.481 [2024-12-08 06:05:20.449389] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:57.481 [2024-12-08 06:05:20.449421] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:57.481 [2024-12-08 06:05:20.449536] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:57.481 [2024-12-08 06:05:20.449559] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:57.481 [2024-12-08 06:05:20.449584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:57.481 [2024-12-08 06:05:20.449599] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:57.481 [2024-12-08 06:05:20.449613] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:57.481 [2024-12-08 06:05:20.449625] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:57.482 [2024-12-08 06:05:20.449652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:57.482 [2024-12-08 06:05:20.449663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:57.482 [2024-12-08 06:05:20.449673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:57.482 [2024-12-08 06:05:20.449699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.482 [2024-12-08 06:05:20.449714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:57.482 [2024-12-08 06:05:20.449730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:17:57.482 [2024-12-08 06:05:20.449741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.482 [2024-12-08 06:05:20.449837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.482 [2024-12-08 06:05:20.449852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:57.482 [2024-12-08 06:05:20.449864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:57.482 [2024-12-08 06:05:20.449875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.482 [2024-12-08 06:05:20.449995] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:57.482 [2024-12-08 06:05:20.450022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:57.482 [2024-12-08 06:05:20.450034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:57.482 [2024-12-08 06:05:20.450079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:57.482 [2024-12-08 06:05:20.450110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.482 [2024-12-08 06:05:20.450134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:57.482 [2024-12-08 06:05:20.450143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:57.482 [2024-12-08 06:05:20.450153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.482 [2024-12-08 06:05:20.450162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:57.482 [2024-12-08 06:05:20.450172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:57.482 [2024-12-08 06:05:20.450182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:57.482 [2024-12-08 06:05:20.450218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450228] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:57.482 [2024-12-08 06:05:20.450267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:57.482 [2024-12-08 06:05:20.450298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:57.482 [2024-12-08 06:05:20.450361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:57.482 [2024-12-08 06:05:20.450415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:57.482 [2024-12-08 06:05:20.450447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.482 [2024-12-08 06:05:20.450468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:57.482 [2024-12-08 06:05:20.450478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:57.482 [2024-12-08 06:05:20.450489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.482 [2024-12-08 06:05:20.450499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:57.482 [2024-12-08 06:05:20.450510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:57.482 [2024-12-08 06:05:20.450520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:57.482 [2024-12-08 06:05:20.450545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:57.482 [2024-12-08 06:05:20.450557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450567] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:57.482 [2024-12-08 06:05:20.450578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:57.482 [2024-12-08 06:05:20.450589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.482 [2024-12-08 06:05:20.450613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:57.482 [2024-12-08 06:05:20.450624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:57.482 [2024-12-08 06:05:20.450635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:57.482 
[2024-12-08 06:05:20.450645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:57.482 [2024-12-08 06:05:20.450656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:57.482 [2024-12-08 06:05:20.450668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:57.482 [2024-12-08 06:05:20.450681] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:57.482 [2024-12-08 06:05:20.450694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.482 [2024-12-08 06:05:20.450717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:57.482 [2024-12-08 06:05:20.450730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:57.482 [2024-12-08 06:05:20.450745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:57.482 [2024-12-08 06:05:20.450758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:57.482 [2024-12-08 06:05:20.450769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:57.482 [2024-12-08 06:05:20.450781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:57.482 [2024-12-08 06:05:20.450792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:57.482 [2024-12-08 06:05:20.450814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:57.482 [2024-12-08 06:05:20.450826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:57.482 [2024-12-08 06:05:20.450837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:57.482 [2024-12-08 06:05:20.450849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:57.482 [2024-12-08 06:05:20.450860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:57.482 [2024-12-08 06:05:20.450872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:57.482 [2024-12-08 06:05:20.450883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:57.482 [2024-12-08 06:05:20.450894] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:57.482 [2024-12-08 06:05:20.450907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.482 [2024-12-08 06:05:20.450929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:57.482 [2024-12-08 06:05:20.450941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:57.482 [2024-12-08 06:05:20.450955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:57.482 [2024-12-08 06:05:20.450968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:57.482 [2024-12-08 06:05:20.450981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.482 [2024-12-08 06:05:20.450993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:57.482 [2024-12-08 06:05:20.451009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:17:57.482 [2024-12-08 06:05:20.451021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.482 [2024-12-08 06:05:20.470589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.482 [2024-12-08 06:05:20.470879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.482 [2024-12-08 06:05:20.470927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.474 ms 00:17:57.482 [2024-12-08 06:05:20.470946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.482 [2024-12-08 06:05:20.471238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.471268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:57.483 [2024-12-08 06:05:20.471286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:57.483 [2024-12-08 06:05:20.471310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.481451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.481495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.483 [2024-12-08 06:05:20.481530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.096 ms 00:17:57.483 [2024-12-08 06:05:20.481542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.481646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.481671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.483 [2024-12-08 06:05:20.481689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.483 [2024-12-08 06:05:20.481701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.482030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.482070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.483 [2024-12-08 06:05:20.482085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:17:57.483 [2024-12-08 06:05:20.482096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.482283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.482305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.483 [2024-12-08 06:05:20.482317] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:57.483 [2024-12-08 06:05:20.482329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.487315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.487353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.483 [2024-12-08 06:05:20.487385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:17:57.483 [2024-12-08 06:05:20.487396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.489837] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:57.483 [2024-12-08 06:05:20.489882] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:57.483 [2024-12-08 06:05:20.489927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.489939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:57.483 [2024-12-08 06:05:20.489951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.370 ms 00:17:57.483 [2024-12-08 06:05:20.489961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.506220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.506258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:57.483 [2024-12-08 06:05:20.506291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.206 ms 00:17:57.483 [2024-12-08 06:05:20.506302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.508174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.508394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:57.483 [2024-12-08 06:05:20.508436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:17:57.483 [2024-12-08 06:05:20.508449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.510056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.510087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:57.483 [2024-12-08 06:05:20.510116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:17:57.483 [2024-12-08 06:05:20.510134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.483 [2024-12-08 06:05:20.510482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.483 [2024-12-08 06:05:20.510502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:57.483 [2024-12-08 06:05:20.510514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:57.483 [2024-12-08 06:05:20.510529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.742 [2024-12-08 06:05:20.526186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.742 [2024-12-08 06:05:20.526266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:57.742 [2024-12-08 06:05:20.526302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.613 ms
00:17:57.742 [2024-12-08 06:05:20.526313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.742 [2024-12-08 06:05:20.534171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:17:57.742 [2024-12-08 06:05:20.546569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.742 [2024-12-08 06:05:20.546867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:57.742 [2024-12-08 06:05:20.546897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.121 ms
00:17:57.742 [2024-12-08 06:05:20.546911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.742 [2024-12-08 06:05:20.547044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.742 [2024-12-08 06:05:20.547089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:17:57.742 [2024-12-08 06:05:20.547103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:17:57.742 [2024-12-08 06:05:20.547127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.742 [2024-12-08 06:05:20.547226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.743 [2024-12-08 06:05:20.547281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:57.743 [2024-12-08 06:05:20.547297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:17:57.743 [2024-12-08 06:05:20.547308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.743 [2024-12-08 06:05:20.547345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.743 [2024-12-08 06:05:20.547376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:57.743 [2024-12-08 06:05:20.547388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:57.743 [2024-12-08 06:05:20.547413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.743 [2024-12-08 06:05:20.547457] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:57.743 [2024-12-08 06:05:20.547502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.743 [2024-12-08 06:05:20.547521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:57.743 [2024-12-08 06:05:20.547533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:17:57.743 [2024-12-08 06:05:20.547560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.743 [2024-12-08 06:05:20.551100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.743 [2024-12-08 06:05:20.551139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:57.743 [2024-12-08 06:05:20.551171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.491 ms
00:17:57.743 [2024-12-08 06:05:20.551183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.743 [2024-12-08 06:05:20.551316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.743 [2024-12-08 06:05:20.551344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:57.743 [2024-12-08 06:05:20.551361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:17:57.743 [2024-12-08 06:05:20.551384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
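The superblock layout dump earlier in this startup prints each region as hex block offsets and sizes (blk_offs/blk_sz), while the region dump prints MiB. Assuming the 4 KiB FTL block size that makes those two views agree, a small converter sketch (region_mib is an illustrative name, not an SPDK API):

```python
FTL_BLOCK = 4096  # bytes; assumed 4 KiB FTL block, which makes the hex
                  # block counts agree with the MiB figures in the dump

def region_mib(blk_offs_hex: str, blk_sz_hex: str, block: int = FTL_BLOCK):
    """Convert one 'Region type:... blk_offs:0x... blk_sz:0x...' entry
    into (offset MiB, size MiB)."""
    to_mib = lambda h: int(h, 16) * block / (1 << 20)
    return to_mib(blk_offs_hex), to_mib(blk_sz_hex)

# Cross-check against the dump above: the l2p region (type:0x2) is
# blk_offs:0x20 blk_sz:0x5a00 -> (0.125, 90.0) MiB, matching the
# 'Region l2p ... blocks: 90.00 MiB' lines, and 23592960 L2P entries
# at 4 B per entry is likewise 90 MiB.
print(region_mib("0x20", "0x5a00"))
```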
00:17:57.743 [2024-12-08 06:05:20.552653] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:57.743 [2024-12-08 06:05:20.553938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.542 ms, result 0
00:17:57.743 [2024-12-08 06:05:20.554680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:57.743 [2024-12-08 06:05:20.564030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:58.680 [2024-12-08T06:05:22.661Z] Copying: 25/256 [MB] (25 MBps)
[2024-12-08T06:05:24.039Z] Copying: 47/256 [MB] (22 MBps)
[2024-12-08T06:05:24.975Z] Copying: 70/256 [MB] (22 MBps)
[2024-12-08T06:05:25.908Z] Copying: 92/256 [MB] (22 MBps)
[2024-12-08T06:05:26.843Z] Copying: 115/256 [MB] (22 MBps)
[2024-12-08T06:05:27.778Z] Copying: 138/256 [MB] (22 MBps)
[2024-12-08T06:05:28.713Z] Copying: 160/256 [MB] (22 MBps)
[2024-12-08T06:05:29.647Z] Copying: 182/256 [MB] (22 MBps)
[2024-12-08T06:05:31.019Z] Copying: 205/256 [MB] (22 MBps)
[2024-12-08T06:05:31.954Z] Copying: 228/256 [MB] (23 MBps)
[2024-12-08T06:05:31.954Z] Copying: 251/256 [MB] (22 MBps)
[2024-12-08T06:05:32.215Z] Copying: 256/256 [MB] (average 22 MBps)
[2024-12-08 06:05:31.965199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:09.170 [2024-12-08 06:05:31.966455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.170 [2024-12-08 06:05:31.966508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:09.170 [2024-12-08 06:05:31.966533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:09.170 [2024-12-08 06:05:31.966556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.170 [2024-12-08 06:05:31.966594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:09.170 [2024-12-08 06:05:31.967097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.170 [2024-12-08 06:05:31.967345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:09.170 [2024-12-08 06:05:31.967378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms
00:18:09.170 [2024-12-08 06:05:31.967395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.170 [2024-12-08 06:05:31.967793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.170 [2024-12-08 06:05:31.967829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:09.170 [2024-12-08 06:05:31.967866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms
00:18:09.170 [2024-12-08 06:05:31.967891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.170 [2024-12-08 06:05:31.972763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.170 [2024-12-08 06:05:31.972810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:09.170 [2024-12-08 06:05:31.972846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.838 ms
00:18:09.170 [2024-12-08 06:05:31.972861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.170 [2024-12-08 06:05:31.980767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.170 [2024-12-08
06:05:31.980801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:09.170 [2024-12-08 06:05:31.980831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.873 ms 00:18:09.170 [2024-12-08 06:05:31.980841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.170 [2024-12-08 06:05:31.982379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.170 [2024-12-08 06:05:31.982415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:09.170 [2024-12-08 06:05:31.982445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:18:09.170 [2024-12-08 06:05:31.982455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.170 [2024-12-08 06:05:31.985486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.170 [2024-12-08 06:05:31.985554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:09.170 [2024-12-08 06:05:31.985593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:18:09.170 [2024-12-08 06:05:31.985603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.170 [2024-12-08 06:05:31.985723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.170 [2024-12-08 06:05:31.985755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:09.170 [2024-12-08 06:05:31.985767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:09.170 [2024-12-08 06:05:31.985777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.170 [2024-12-08 06:05:31.987694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.171 [2024-12-08 06:05:31.987729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:09.171 [2024-12-08 06:05:31.987759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:18:09.171 [2024-12-08 06:05:31.987785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.171 [2024-12-08 06:05:31.989297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.171 [2024-12-08 06:05:31.989343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:09.171 [2024-12-08 06:05:31.989372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:18:09.171 [2024-12-08 06:05:31.989382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.171 [2024-12-08 06:05:31.990653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.171 [2024-12-08 06:05:31.990689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:09.171 [2024-12-08 06:05:31.990733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:18:09.171 [2024-12-08 06:05:31.990757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.171 [2024-12-08 06:05:31.992042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.171 [2024-12-08 06:05:31.992216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:09.171 [2024-12-08 06:05:31.992257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:18:09.171 [2024-12-08 06:05:31.992268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.171 [2024-12-08 06:05:31.992296] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:09.171 [2024-12-08 06:05:31.992323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992596] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 
06:05:31.992869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.992993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:09.171 [2024-12-08 06:05:31.993117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:18:09.171 [2024-12-08 06:05:31.993127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:09.172 [2024-12-08 06:05:31.993445] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:09.172 [2024-12-08 06:05:31.993456] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c471906e-ccc4-48c5-8290-7cb3cbdef2d5 00:18:09.172 [2024-12-08 06:05:31.993467] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:09.172 [2024-12-08 06:05:31.993477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:09.172 [2024-12-08 06:05:31.993497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:09.172 [2024-12-08 06:05:31.993507] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:09.172 [2024-12-08 06:05:31.993517] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:09.172 [2024-12-08 06:05:31.993528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:09.172 [2024-12-08 06:05:31.993538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:09.172 [2024-12-08 06:05:31.993547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:09.172 [2024-12-08 06:05:31.993556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:09.172 [2024-12-08 06:05:31.993581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.172 [2024-12-08 06:05:31.993593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:09.172 [2024-12-08 06:05:31.993610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:18:09.172 [2024-12-08 06:05:31.993620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:31.994894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.172 [2024-12-08 06:05:31.994930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:09.172 [2024-12-08 06:05:31.994943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:18:09.172 [2024-12-08 06:05:31.994954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:31.995029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.172 [2024-12-08 06:05:31.995050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:09.172 [2024-12-08 06:05:31.995062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:09.172 [2024-12-08 06:05:31.995082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:31.999506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:31.999676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.172 [2024-12-08 06:05:31.999814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:31.999835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:31.999913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:31.999937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.172 [2024-12-08 06:05:31.999949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:31.999960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.000014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.000031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.172 [2024-12-08 06:05:32.000043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.000053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.000090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.000102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.172 [2024-12-08 06:05:32.000119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.000129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.007899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.007978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.172 [2024-12-08 06:05:32.008009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.008033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.014206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.014249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.172 [2024-12-08 06:05:32.014296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.014318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.014355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.014369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.172 [2024-12-08 06:05:32.014380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.014390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.014420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.014433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.172 [2024-12-08 06:05:32.014457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.014477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.014616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.014636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.172 [2024-12-08 06:05:32.014648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.014659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.014707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.014724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:09.172 
[2024-12-08 06:05:32.014736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.014747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.014810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.014825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.172 [2024-12-08 06:05:32.014836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.014848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.014900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.172 [2024-12-08 06:05:32.014916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.172 [2024-12-08 06:05:32.014928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.172 [2024-12-08 06:05:32.014944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.172 [2024-12-08 06:05:32.015095] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.617 ms, result 0 00:18:09.431 00:18:09.431 00:18:09.431 06:05:32 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:10.004 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:10.004 06:05:32 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:10.004 06:05:32 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:10.004 06:05:32 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:10.004 06:05:32 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.004 06:05:32 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:10.004 06:05:32 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:10.004 06:05:32 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86772 00:18:10.004 Process with pid 86772 is not found 00:18:10.004 06:05:32 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86772 ']' 00:18:10.004 06:05:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86772 00:18:10.004 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86772) - No such process 00:18:10.004 06:05:32 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86772 is not found' 00:18:10.004 00:18:10.004 real 0m57.349s 00:18:10.004 user 1m20.851s 00:18:10.004 sys 0m6.066s 00:18:10.004 06:05:32 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.004 06:05:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:10.004 ************************************ 00:18:10.004 END TEST ftl_trim 00:18:10.004 ************************************ 00:18:10.004 06:05:32 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:10.004 06:05:32 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:10.004 06:05:32 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.004 06:05:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:10.004 ************************************ 00:18:10.004 START TEST ftl_restore 00:18:10.004 ************************************ 00:18:10.004 
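The ftl_trim run above closes with its integrity gate: md5sum -c compares the data read back through ftl0 against a digest recorded before the device was torn down, and the "OK" line is what lets the test pass. The ftl_restore test starting here leans on the same write/verify round-trip. A minimal sketch of the pattern, with placeholder paths rather than the harness's own files:

    # Sketch of the checksum round-trip (paths are placeholders, not the test's own).
    dd if=/dev/urandom of=/tmp/data bs=1M count=64 2>/dev/null   # reference pattern
    md5sum /tmp/data > /tmp/testfile.md5                         # record the expected digest
    # ... write /tmp/data through the FTL bdev, restart or restore the device,
    # then read the same range back into /tmp/data ...
    md5sum -c /tmp/testfile.md5                                  # prints "OK" only on a byte-exact match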
06:05:32 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:10.004 * Looking for test storage... 00:18:10.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.270 06:05:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.270 --rc genhtml_branch_coverage=1 00:18:10.270 --rc genhtml_function_coverage=1 00:18:10.270 --rc genhtml_legend=1 00:18:10.270 --rc geninfo_all_blocks=1 00:18:10.270 --rc geninfo_unexecuted_blocks=1 00:18:10.270 00:18:10.270 ' 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.270 --rc genhtml_branch_coverage=1 00:18:10.270 --rc genhtml_function_coverage=1 00:18:10.270 --rc genhtml_legend=1 00:18:10.270 --rc geninfo_all_blocks=1 00:18:10.270 --rc geninfo_unexecuted_blocks=1 00:18:10.270 00:18:10.270 ' 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.270 --rc genhtml_branch_coverage=1 00:18:10.270 --rc genhtml_function_coverage=1 00:18:10.270 --rc genhtml_legend=1 00:18:10.270 --rc geninfo_all_blocks=1 00:18:10.270 --rc geninfo_unexecuted_blocks=1 00:18:10.270 00:18:10.270 ' 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.270 --rc genhtml_branch_coverage=1 00:18:10.270 --rc genhtml_function_coverage=1 00:18:10.270 --rc genhtml_legend=1 00:18:10.270 --rc geninfo_all_blocks=1 00:18:10.270 --rc geninfo_unexecuted_blocks=1 00:18:10.270 00:18:10.270 ' 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
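The scripts/common.sh trace above is a component-wise version comparison: each version string is split on ".-:" into an array and the components are compared numerically, which is how lcov 1.15 is judged older than 2 before the coverage flags are exported. A condensed sketch of the same decision, using GNU sort -V in place of the script's manual array walk:

    # Version-aware strict "less than" (same outcome as the traced cmp_versions logic).
    lt() {
      [ "$1" = "$2" ] && return 1                                # equal is not less-than
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "lcov older than 2.x: use the lcov_branch_coverage=1 option set"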
00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Pkc94lQ1Bd 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:10.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
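At this point restore.sh has parsed its options (-c routed 0000:00:10.0 to the NV cache while 0000:00:11.0 became the base device), launched spdk_tgt as pid 87007, and waitforlisten now blocks until the target answers on /var/tmp/spdk.sock. One plausible way to implement such a wait is to poll a cheap RPC until it succeeds; the loop below is a sketch, not the harness's own code:

    # Poll the target's UNIX-domain RPC socket until it starts serving (sketch).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break                        # target is up and answering RPCs
      fi
      sleep 0.5                      # not listening yet; back off and retry
    done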
00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=87007 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 87007 00:18:10.270 06:05:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 87007 ']' 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.270 06:05:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:10.270 [2024-12-08 06:05:33.304213] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:10.270 [2024-12-08 06:05:33.304644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87007 ] 00:18:10.529 [2024-12-08 06:05:33.457258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.529 [2024-12-08 06:05:33.502646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.466 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.466 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:11.466 06:05:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:11.466 06:05:34 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:11.466 06:05:34 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:11.466 06:05:34 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:11.466 06:05:34 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:11.466 06:05:34 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:11.725 06:05:34 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:11.725 06:05:34 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:11.725 06:05:34 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:11.725 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:11.725 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:11.725 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:11.725 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:11.725 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:11.985 06:05:34 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:11.985 { 00:18:11.985 "name": "nvme0n1", 00:18:11.985 "aliases": [ 00:18:11.985 "ae907fd9-bc5a-401b-a7ff-999609e8d188" 00:18:11.985 ], 00:18:11.985 "product_name": "NVMe disk", 00:18:11.985 "block_size": 4096, 00:18:11.985 "num_blocks": 1310720, 00:18:11.985 "uuid": "ae907fd9-bc5a-401b-a7ff-999609e8d188", 00:18:11.985 "numa_id": -1, 00:18:11.985 "assigned_rate_limits": { 00:18:11.985 "rw_ios_per_sec": 0, 00:18:11.985 "rw_mbytes_per_sec": 0, 00:18:11.985 "r_mbytes_per_sec": 0, 00:18:11.985 "w_mbytes_per_sec": 0 00:18:11.985 }, 00:18:11.985 "claimed": true, 00:18:11.985 "claim_type": "read_many_write_one", 00:18:11.985 "zoned": false, 00:18:11.985 "supported_io_types": { 00:18:11.985 "read": true, 00:18:11.985 "write": true, 00:18:11.985 "unmap": true, 00:18:11.985 "flush": true, 00:18:11.985 "reset": true, 00:18:11.985 "nvme_admin": true, 00:18:11.985 "nvme_io": true, 00:18:11.985 "nvme_io_md": false, 00:18:11.985 "write_zeroes": true, 00:18:11.985 "zcopy": false, 00:18:11.985 "get_zone_info": false, 00:18:11.985 "zone_management": false, 00:18:11.985 "zone_append": false, 00:18:11.985 "compare": true, 00:18:11.985 "compare_and_write": false, 00:18:11.985 "abort": true, 00:18:11.985 "seek_hole": false, 00:18:11.985 "seek_data": false, 00:18:11.985 "copy": true, 00:18:11.985 "nvme_iov_md": false 00:18:11.985 }, 00:18:11.985 "driver_specific": { 00:18:11.985 "nvme": [ 00:18:11.985 { 00:18:11.985 "pci_address": "0000:00:11.0", 00:18:11.985 "trid": { 00:18:11.985 "trtype": "PCIe", 00:18:11.985 "traddr": "0000:00:11.0" 00:18:11.985 }, 00:18:11.985 "ctrlr_data": { 00:18:11.985 "cntlid": 0, 00:18:11.985 "vendor_id": "0x1b36", 00:18:11.985 "model_number": "QEMU NVMe Ctrl", 00:18:11.985 "serial_number": "12341", 00:18:11.985 "firmware_revision": "8.0.0", 00:18:11.985 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:11.985 "oacs": { 00:18:11.985 "security": 0, 00:18:11.985 "format": 1, 00:18:11.985 "firmware": 0, 00:18:11.985 "ns_manage": 1 00:18:11.985 }, 00:18:11.985 "multi_ctrlr": false, 00:18:11.985 "ana_reporting": false 00:18:11.985 }, 00:18:11.985 "vs": { 00:18:11.985 "nvme_version": "1.4" 00:18:11.985 }, 00:18:11.985 "ns_data": { 00:18:11.985 "id": 1, 00:18:11.985 "can_share": false 00:18:11.985 } 00:18:11.985 } 00:18:11.985 ], 00:18:11.985 "mp_policy": "active_passive" 00:18:11.985 } 00:18:11.985 } 00:18:11.985 ]' 00:18:11.985 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:11.985 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:11.985 06:05:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:12.246 06:05:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:12.246 06:05:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:12.246 06:05:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:12.246 06:05:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:12.246 06:05:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:12.246 06:05:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:12.246 06:05:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:12.246 06:05:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:12.507 06:05:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5bec372f-74a4-4c96-8491-a140adb2cb3b 00:18:12.507 06:05:35 
ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:12.507 06:05:35 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bec372f-74a4-4c96-8491-a140adb2cb3b 00:18:12.765 06:05:35 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:13.023 06:05:35 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=11a79ce9-919f-4f2f-aa96-0f0f91450447 00:18:13.023 06:05:35 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 11a79ce9-919f-4f2f-aa96-0f0f91450447 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:13.281 06:05:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.281 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.281 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:13.281 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:13.281 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:13.281 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.539 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:13.539 { 00:18:13.539 "name": "e56010be-a0e1-4159-b275-73349858b1b0", 00:18:13.539 "aliases": [ 00:18:13.539 "lvs/nvme0n1p0" 00:18:13.539 ], 00:18:13.539 "product_name": "Logical Volume", 00:18:13.539 "block_size": 4096, 00:18:13.539 "num_blocks": 26476544, 00:18:13.539 "uuid": "e56010be-a0e1-4159-b275-73349858b1b0", 00:18:13.539 "assigned_rate_limits": { 00:18:13.539 "rw_ios_per_sec": 0, 00:18:13.539 "rw_mbytes_per_sec": 0, 00:18:13.539 "r_mbytes_per_sec": 0, 00:18:13.539 "w_mbytes_per_sec": 0 00:18:13.539 }, 00:18:13.539 "claimed": false, 00:18:13.539 "zoned": false, 00:18:13.539 "supported_io_types": { 00:18:13.539 "read": true, 00:18:13.539 "write": true, 00:18:13.539 "unmap": true, 00:18:13.539 "flush": false, 00:18:13.539 "reset": true, 00:18:13.539 "nvme_admin": false, 00:18:13.539 "nvme_io": false, 00:18:13.539 "nvme_io_md": false, 00:18:13.539 "write_zeroes": true, 00:18:13.539 "zcopy": false, 00:18:13.539 "get_zone_info": false, 00:18:13.539 "zone_management": false, 00:18:13.539 "zone_append": false, 00:18:13.539 "compare": false, 00:18:13.539 "compare_and_write": false, 00:18:13.539 "abort": false, 00:18:13.539 "seek_hole": true, 00:18:13.539 "seek_data": true, 00:18:13.539 "copy": false, 00:18:13.539 "nvme_iov_md": false 00:18:13.539 }, 00:18:13.539 "driver_specific": { 00:18:13.539 "lvol": { 00:18:13.539 "lvol_store_uuid": "11a79ce9-919f-4f2f-aa96-0f0f91450447", 
00:18:13.539 "base_bdev": "nvme0n1", 00:18:13.539 "thin_provision": true, 00:18:13.539 "num_allocated_clusters": 0, 00:18:13.539 "snapshot": false, 00:18:13.539 "clone": false, 00:18:13.539 "esnap_clone": false 00:18:13.539 } 00:18:13.539 } 00:18:13.539 } 00:18:13.539 ]' 00:18:13.539 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:13.539 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:13.539 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:13.539 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:13.539 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:13.539 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:13.539 06:05:36 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:13.539 06:05:36 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:13.539 06:05:36 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:13.797 06:05:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:13.797 06:05:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:13.797 06:05:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.797 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e56010be-a0e1-4159-b275-73349858b1b0 00:18:13.797 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:13.797 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:13.797 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:13.797 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e56010be-a0e1-4159-b275-73349858b1b0 00:18:14.055 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:14.055 { 00:18:14.055 "name": "e56010be-a0e1-4159-b275-73349858b1b0", 00:18:14.055 "aliases": [ 00:18:14.055 "lvs/nvme0n1p0" 00:18:14.055 ], 00:18:14.055 "product_name": "Logical Volume", 00:18:14.055 "block_size": 4096, 00:18:14.055 "num_blocks": 26476544, 00:18:14.055 "uuid": "e56010be-a0e1-4159-b275-73349858b1b0", 00:18:14.055 "assigned_rate_limits": { 00:18:14.055 "rw_ios_per_sec": 0, 00:18:14.055 "rw_mbytes_per_sec": 0, 00:18:14.055 "r_mbytes_per_sec": 0, 00:18:14.055 "w_mbytes_per_sec": 0 00:18:14.055 }, 00:18:14.055 "claimed": false, 00:18:14.055 "zoned": false, 00:18:14.055 "supported_io_types": { 00:18:14.055 "read": true, 00:18:14.055 "write": true, 00:18:14.055 "unmap": true, 00:18:14.055 "flush": false, 00:18:14.055 "reset": true, 00:18:14.055 "nvme_admin": false, 00:18:14.055 "nvme_io": false, 00:18:14.055 "nvme_io_md": false, 00:18:14.055 "write_zeroes": true, 00:18:14.055 "zcopy": false, 00:18:14.055 "get_zone_info": false, 00:18:14.055 "zone_management": false, 00:18:14.055 "zone_append": false, 00:18:14.055 "compare": false, 00:18:14.055 "compare_and_write": false, 00:18:14.055 "abort": false, 00:18:14.055 "seek_hole": true, 00:18:14.055 "seek_data": true, 00:18:14.055 "copy": false, 00:18:14.055 "nvme_iov_md": false 00:18:14.055 }, 00:18:14.055 "driver_specific": { 00:18:14.055 "lvol": { 00:18:14.055 "lvol_store_uuid": "11a79ce9-919f-4f2f-aa96-0f0f91450447", 00:18:14.055 "base_bdev": "nvme0n1", 00:18:14.055 
"thin_provision": true, 00:18:14.055 "num_allocated_clusters": 0, 00:18:14.055 "snapshot": false, 00:18:14.056 "clone": false, 00:18:14.056 "esnap_clone": false 00:18:14.056 } 00:18:14.056 } 00:18:14.056 } 00:18:14.056 ]' 00:18:14.056 06:05:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:14.056 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:14.056 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:14.056 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:14.056 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:14.056 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:14.056 06:05:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:14.056 06:05:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:14.314 06:05:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:14.572 06:05:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e56010be-a0e1-4159-b275-73349858b1b0 00:18:14.572 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e56010be-a0e1-4159-b275-73349858b1b0 00:18:14.572 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:14.572 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:14.572 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:14.572 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e56010be-a0e1-4159-b275-73349858b1b0 00:18:14.572 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:14.572 { 00:18:14.572 "name": "e56010be-a0e1-4159-b275-73349858b1b0", 00:18:14.572 "aliases": [ 00:18:14.572 "lvs/nvme0n1p0" 00:18:14.572 ], 00:18:14.572 "product_name": "Logical Volume", 00:18:14.572 "block_size": 4096, 00:18:14.572 "num_blocks": 26476544, 00:18:14.572 "uuid": "e56010be-a0e1-4159-b275-73349858b1b0", 00:18:14.572 "assigned_rate_limits": { 00:18:14.572 "rw_ios_per_sec": 0, 00:18:14.572 "rw_mbytes_per_sec": 0, 00:18:14.572 "r_mbytes_per_sec": 0, 00:18:14.572 "w_mbytes_per_sec": 0 00:18:14.572 }, 00:18:14.572 "claimed": false, 00:18:14.572 "zoned": false, 00:18:14.572 "supported_io_types": { 00:18:14.572 "read": true, 00:18:14.572 "write": true, 00:18:14.572 "unmap": true, 00:18:14.572 "flush": false, 00:18:14.572 "reset": true, 00:18:14.572 "nvme_admin": false, 00:18:14.572 "nvme_io": false, 00:18:14.572 "nvme_io_md": false, 00:18:14.572 "write_zeroes": true, 00:18:14.572 "zcopy": false, 00:18:14.572 "get_zone_info": false, 00:18:14.572 "zone_management": false, 00:18:14.572 "zone_append": false, 00:18:14.572 "compare": false, 00:18:14.572 "compare_and_write": false, 00:18:14.572 "abort": false, 00:18:14.572 "seek_hole": true, 00:18:14.572 "seek_data": true, 00:18:14.572 "copy": false, 00:18:14.572 "nvme_iov_md": false 00:18:14.572 }, 00:18:14.572 "driver_specific": { 00:18:14.572 "lvol": { 00:18:14.572 "lvol_store_uuid": "11a79ce9-919f-4f2f-aa96-0f0f91450447", 00:18:14.572 "base_bdev": "nvme0n1", 00:18:14.572 "thin_provision": true, 00:18:14.572 "num_allocated_clusters": 0, 00:18:14.572 "snapshot": false, 00:18:14.572 "clone": false, 00:18:14.572 "esnap_clone": false 00:18:14.572 } 00:18:14.572 } 00:18:14.572 } 00:18:14.572 ]' 00:18:14.572 06:05:37 
ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:14.830 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:14.830 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:14.830 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:14.830 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:14.830 06:05:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:14.830 06:05:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:14.830 06:05:37 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e56010be-a0e1-4159-b275-73349858b1b0 --l2p_dram_limit 10' 00:18:14.830 06:05:37 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:14.830 06:05:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:14.830 06:05:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:14.830 06:05:37 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:14.830 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:14.830 06:05:37 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e56010be-a0e1-4159-b275-73349858b1b0 --l2p_dram_limit 10 -c nvc0n1p0 00:18:15.089 [2024-12-08 06:05:37.945806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.089 [2024-12-08 06:05:37.945905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:15.089 [2024-12-08 06:05:37.945925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:15.089 [2024-12-08 06:05:37.945938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.089 [2024-12-08 06:05:37.946029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.089 [2024-12-08 06:05:37.946052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.089 [2024-12-08 06:05:37.946064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:15.089 [2024-12-08 06:05:37.946078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.089 [2024-12-08 06:05:37.946117] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:15.089 [2024-12-08 06:05:37.946551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:15.089 [2024-12-08 06:05:37.946593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.089 [2024-12-08 06:05:37.946607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.089 [2024-12-08 06:05:37.946622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:18:15.089 [2024-12-08 06:05:37.946650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.089 [2024-12-08 06:05:37.946849] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 25ee0777-a5f2-4ffa-adf2-aa0d3705dbe4 00:18:15.089 [2024-12-08 06:05:37.948072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.089 [2024-12-08 06:05:37.948128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:15.089 [2024-12-08 06:05:37.948164] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:15.089 [2024-12-08 06:05:37.948175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.089 [2024-12-08 06:05:37.952894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.089 [2024-12-08 06:05:37.952953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.089 [2024-12-08 06:05:37.952993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.623 ms 00:18:15.089 [2024-12-08 06:05:37.953004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.089 [2024-12-08 06:05:37.953100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.089 [2024-12-08 06:05:37.953118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.089 [2024-12-08 06:05:37.953134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:15.089 [2024-12-08 06:05:37.953147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.089 [2024-12-08 06:05:37.953301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.089 [2024-12-08 06:05:37.953322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:15.090 [2024-12-08 06:05:37.953337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:15.090 [2024-12-08 06:05:37.953348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.090 [2024-12-08 06:05:37.953385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:15.090 [2024-12-08 06:05:37.954996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.090 [2024-12-08 06:05:37.955059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.090 [2024-12-08 06:05:37.955094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:18:15.090 [2024-12-08 06:05:37.955107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.090 [2024-12-08 06:05:37.955148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.090 [2024-12-08 06:05:37.955166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:15.090 [2024-12-08 06:05:37.955178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:15.090 [2024-12-08 06:05:37.955216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.090 [2024-12-08 06:05:37.955262] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:15.090 [2024-12-08 06:05:37.955450] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:15.090 [2024-12-08 06:05:37.955504] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:15.090 [2024-12-08 06:05:37.955526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:15.090 [2024-12-08 06:05:37.955543] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:15.090 [2024-12-08 06:05:37.955559] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:15.090 [2024-12-08 06:05:37.955572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 
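Together with the 4-byte L2P address size logged next, the entry count just printed fixes the mapping-table footprint: 20971520 entries at 4 B each is an 80 MiB table, mapping 20971520 4-KiB blocks (80 GiB) of user space out of the 103424 MiB base device. Since bdev_ftl_create was invoked with --l2p_dram_limit 10, only a ~10 MiB window of that table may stay resident, which is why the L2P cache later reports "l2p maximum resident size is: 9 (of 10) MiB". The arithmetic:

    # Full L2P table size and the user space it maps (numbers from the log).
    echo $(( 20971520 * 4    / 1024 / 1024 ))   # 80 MiB of map if fully resident
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 MiB (80 GiB) addressable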
00:18:15.090 [2024-12-08 06:05:37.955589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:15.090 [2024-12-08 06:05:37.955600] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:15.090 [2024-12-08 06:05:37.955613] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:15.090 [2024-12-08 06:05:37.955629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.090 [2024-12-08 06:05:37.955642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:15.090 [2024-12-08 06:05:37.955655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:18:15.090 [2024-12-08 06:05:37.955668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.090 [2024-12-08 06:05:37.955764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.090 [2024-12-08 06:05:37.955784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:15.090 [2024-12-08 06:05:37.955797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:15.090 [2024-12-08 06:05:37.955840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.090 [2024-12-08 06:05:37.955941] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:15.090 [2024-12-08 06:05:37.955963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:15.090 [2024-12-08 06:05:37.955974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.090 [2024-12-08 06:05:37.955998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:15.090 [2024-12-08 06:05:37.956031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:15.090 [2024-12-08 06:05:37.956064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.090 [2024-12-08 06:05:37.956086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:15.090 [2024-12-08 06:05:37.956098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:15.090 [2024-12-08 06:05:37.956108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.090 [2024-12-08 06:05:37.956122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:15.090 [2024-12-08 06:05:37.956133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:15.090 [2024-12-08 06:05:37.956145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:15.090 [2024-12-08 06:05:37.956166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:15.090 [2024-12-08 06:05:37.956233] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:15.090 [2024-12-08 06:05:37.956267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:15.090 [2024-12-08 06:05:37.956299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:15.090 [2024-12-08 06:05:37.956337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:15.090 [2024-12-08 06:05:37.956384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.090 [2024-12-08 06:05:37.956420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:15.090 [2024-12-08 06:05:37.956431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:15.090 [2024-12-08 06:05:37.956441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.090 [2024-12-08 06:05:37.956453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:15.090 [2024-12-08 06:05:37.956463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:15.090 [2024-12-08 06:05:37.956474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:15.090 [2024-12-08 06:05:37.956497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:15.090 [2024-12-08 06:05:37.956507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956518] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:15.090 [2024-12-08 06:05:37.956539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:15.090 [2024-12-08 06:05:37.956555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.090 [2024-12-08 06:05:37.956579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:15.090 [2024-12-08 06:05:37.956589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:15.090 [2024-12-08 06:05:37.956601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:15.090 [2024-12-08 06:05:37.956611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:15.090 [2024-12-08 06:05:37.956622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:15.090 
[2024-12-08 06:05:37.956632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:15.090 [2024-12-08 06:05:37.956650] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:15.090 [2024-12-08 06:05:37.956663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.090 [2024-12-08 06:05:37.956677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:15.090 [2024-12-08 06:05:37.956688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:15.090 [2024-12-08 06:05:37.956701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:15.090 [2024-12-08 06:05:37.956712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:15.090 [2024-12-08 06:05:37.956724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:15.090 [2024-12-08 06:05:37.956736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:15.090 [2024-12-08 06:05:37.956750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:15.090 [2024-12-08 06:05:37.956761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:15.090 [2024-12-08 06:05:37.956774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:15.090 [2024-12-08 06:05:37.956785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:15.090 [2024-12-08 06:05:37.956797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:15.090 [2024-12-08 06:05:37.956807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:15.090 [2024-12-08 06:05:37.956820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:15.090 [2024-12-08 06:05:37.956831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:15.090 [2024-12-08 06:05:37.956843] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:15.090 [2024-12-08 06:05:37.956855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.091 [2024-12-08 06:05:37.956872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:15.091 [2024-12-08 06:05:37.956883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:15.091 
[2024-12-08 06:05:37.956896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:15.091 [2024-12-08 06:05:37.956907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:15.091 [2024-12-08 06:05:37.956921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.091 [2024-12-08 06:05:37.956932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:15.091 [2024-12-08 06:05:37.956949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:18:15.091 [2024-12-08 06:05:37.956960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.091 [2024-12-08 06:05:37.957014] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:15.091 [2024-12-08 06:05:37.957030] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:16.996 [2024-12-08 06:05:39.990071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.996 [2024-12-08 06:05:39.990161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:16.996 [2024-12-08 06:05:39.990235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2033.062 ms 00:18:16.996 [2024-12-08 06:05:39.990253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.996 [2024-12-08 06:05:39.998124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.996 [2024-12-08 06:05:39.998236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.996 [2024-12-08 06:05:39.998259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.756 ms 00:18:16.996 [2024-12-08 06:05:39.998270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.996 [2024-12-08 06:05:39.998387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.996 [2024-12-08 06:05:39.998402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.996 [2024-12-08 06:05:39.998416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:16.996 [2024-12-08 06:05:39.998431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.997 [2024-12-08 06:05:40.007205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.997 [2024-12-08 06:05:40.007342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.997 [2024-12-08 06:05:40.007372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.665 ms 00:18:16.997 [2024-12-08 06:05:40.007384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.997 [2024-12-08 06:05:40.007439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.997 [2024-12-08 06:05:40.007455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.997 [2024-12-08 06:05:40.007491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:16.997 [2024-12-08 06:05:40.007512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.997 [2024-12-08 06:05:40.007892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.997 [2024-12-08 06:05:40.007911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize trim map 00:18:16.997 [2024-12-08 06:05:40.007938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:18:16.997 [2024-12-08 06:05:40.007949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.997 [2024-12-08 06:05:40.008122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.997 [2024-12-08 06:05:40.008139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.997 [2024-12-08 06:05:40.008154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:18:16.997 [2024-12-08 06:05:40.008168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.997 [2024-12-08 06:05:40.025996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.997 [2024-12-08 06:05:40.026087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.997 [2024-12-08 06:05:40.026142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.790 ms 00:18:16.997 [2024-12-08 06:05:40.026160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.997 [2024-12-08 06:05:40.037565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:17.256 [2024-12-08 06:05:40.040606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.040656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:17.256 [2024-12-08 06:05:40.040691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.215 ms 00:18:17.256 [2024-12-08 06:05:40.040720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.089444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.089525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:17.256 [2024-12-08 06:05:40.089546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.666 ms 00:18:17.256 [2024-12-08 06:05:40.089563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.089798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.089819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:17.256 [2024-12-08 06:05:40.089833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:18:17.256 [2024-12-08 06:05:40.089845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.093626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.093733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:17.256 [2024-12-08 06:05:40.093765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:18:17.256 [2024-12-08 06:05:40.093779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.097051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.097125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:17.256 [2024-12-08 06:05:40.097142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.244 ms 00:18:17.256 [2024-12-08 06:05:40.097154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.097529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.097561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:17.256 [2024-12-08 06:05:40.097575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:18:17.256 [2024-12-08 06:05:40.097597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.122364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.122453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:17.256 [2024-12-08 06:05:40.122474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.739 ms 00:18:17.256 [2024-12-08 06:05:40.122488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.126708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.126788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:17.256 [2024-12-08 06:05:40.126805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.160 ms 00:18:17.256 [2024-12-08 06:05:40.126818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.130509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.130585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:17.256 [2024-12-08 06:05:40.130616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.647 ms 00:18:17.256 [2024-12-08 06:05:40.130628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.134893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.134968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:17.256 [2024-12-08 06:05:40.134985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms 00:18:17.256 [2024-12-08 06:05:40.135000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.135054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.135074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:17.256 [2024-12-08 06:05:40.135095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:17.256 [2024-12-08 06:05:40.135108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.135232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.256 [2024-12-08 06:05:40.135254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:17.256 [2024-12-08 06:05:40.135267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:17.256 [2024-12-08 06:05:40.135304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.256 [2024-12-08 06:05:40.136591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2190.188 ms, result 0 00:18:17.256 { 00:18:17.256 "name": "ftl0", 00:18:17.256 "uuid": "25ee0777-a5f2-4ffa-adf2-aa0d3705dbe4" 00:18:17.256 } 00:18:17.256 06:05:40 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": 
[' 00:18:17.256 06:05:40 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:17.515 06:05:40 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:17.515 06:05:40 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:17.778 [2024-12-08 06:05:40.746767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.746819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.778 [2024-12-08 06:05:40.746857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:17.778 [2024-12-08 06:05:40.746867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.746902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:17.778 [2024-12-08 06:05:40.747347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.747375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.778 [2024-12-08 06:05:40.747388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:18:17.778 [2024-12-08 06:05:40.747401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.747752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.747783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.778 [2024-12-08 06:05:40.747812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:18:17.778 [2024-12-08 06:05:40.747824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.751059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.751095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:17.778 [2024-12-08 06:05:40.751124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:18:17.778 [2024-12-08 06:05:40.751136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.757385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.757634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.778 [2024-12-08 06:05:40.757661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:18:17.778 [2024-12-08 06:05:40.757677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.759145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.759269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.778 [2024-12-08 06:05:40.759287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:18:17.778 [2024-12-08 06:05:40.759301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.763569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.763661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.778 [2024-12-08 06:05:40.763679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.224 ms 
00:18:17.778 [2024-12-08 06:05:40.763693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.763855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.763877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.778 [2024-12-08 06:05:40.763890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:17.778 [2024-12-08 06:05:40.763902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.765676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.765763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:17.778 [2024-12-08 06:05:40.765778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:18:17.778 [2024-12-08 06:05:40.765791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.767371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.767433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:17.778 [2024-12-08 06:05:40.767449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:18:17.778 [2024-12-08 06:05:40.767462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.768818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.769048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.778 [2024-12-08 06:05:40.769073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:18:17.778 [2024-12-08 06:05:40.769087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.770517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.778 [2024-12-08 06:05:40.770578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.778 [2024-12-08 06:05:40.770618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:18:17.778 [2024-12-08 06:05:40.770632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.778 [2024-12-08 06:05:40.770673] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.778 [2024-12-08 06:05:40.770702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770806] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.770990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.778 [2024-12-08 06:05:40.771101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 
06:05:40.771126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:18:17.779 [2024-12-08 06:05:40.771554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.771997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.779 [2024-12-08 06:05:40.772153] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.779 [2024-12-08 06:05:40.772165] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25ee0777-a5f2-4ffa-adf2-aa0d3705dbe4 00:18:17.779 [2024-12-08 06:05:40.772178] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.779 [2024-12-08 06:05:40.772203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.779 [2024-12-08 06:05:40.772215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.779 [2024-12-08 06:05:40.772226] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.779 [2024-12-08 06:05:40.772238] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.779 [2024-12-08 06:05:40.772249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.779 
[2024-12-08 06:05:40.772275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.779 [2024-12-08 06:05:40.772286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.779 [2024-12-08 06:05:40.772297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:17.779 [2024-12-08 06:05:40.772308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.779 [2024-12-08 06:05:40.772321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.779 [2024-12-08 06:05:40.772335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:18:17.779 [2024-12-08 06:05:40.772347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.779 [2024-12-08 06:05:40.773716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.779 [2024-12-08 06:05:40.773750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.779 [2024-12-08 06:05:40.773763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:18:17.779 [2024-12-08 06:05:40.773775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.779 [2024-12-08 06:05:40.773848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.779 [2024-12-08 06:05:40.773869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.779 [2024-12-08 06:05:40.773881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:17.779 [2024-12-08 06:05:40.773892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.779 [2024-12-08 06:05:40.779181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.779282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.780 [2024-12-08 06:05:40.779326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.779353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.779432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.779453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.780 [2024-12-08 06:05:40.779466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.779516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.779632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.779660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.780 [2024-12-08 06:05:40.779673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.779686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.779713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.779730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.780 [2024-12-08 06:05:40.779747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.779760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.788130] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.788238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.780 [2024-12-08 06:05:40.788264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.788278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.796074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.796146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.780 [2024-12-08 06:05:40.796163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.796178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.796299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.796325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.780 [2024-12-08 06:05:40.796337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.796350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.796423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.796443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.780 [2024-12-08 06:05:40.796471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.796487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.796578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.796601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.780 [2024-12-08 06:05:40.796614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.796626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.796684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.796838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.780 [2024-12-08 06:05:40.796864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.796878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.796935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.796971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.780 [2024-12-08 06:05:40.796983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.797006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 [2024-12-08 06:05:40.797057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.780 [2024-12-08 06:05:40.797077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.780 [2024-12-08 06:05:40.797088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.780 [2024-12-08 06:05:40.797103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.780 
[2024-12-08 06:05:40.797264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.444 ms, result 0 00:18:17.780 true 00:18:18.040 06:05:40 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 87007 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87007 ']' 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87007 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87007 00:18:18.040 killing process with pid 87007 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87007' 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 87007 00:18:18.040 06:05:40 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 87007 00:18:21.331 06:05:43 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:25.514 262144+0 records in 00:18:25.514 262144+0 records out 00:18:25.514 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.5331 s, 237 MB/s 00:18:25.514 06:05:48 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:27.437 06:05:50 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:27.437 [2024-12-08 06:05:50.308333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:18:27.437 [2024-12-08 06:05:50.308518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87221 ] 00:18:27.437 [2024-12-08 06:05:50.458600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.696 [2024-12-08 06:05:50.502475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.696 [2024-12-08 06:05:50.595770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:27.696 [2024-12-08 06:05:50.595878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:27.956 [2024-12-08 06:05:50.754851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.754902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:27.956 [2024-12-08 06:05:50.754924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:27.956 [2024-12-08 06:05:50.754942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.755009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.755028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:27.956 [2024-12-08 06:05:50.755039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:27.956 [2024-12-08 06:05:50.755060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.755088] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:27.956 [2024-12-08 06:05:50.755403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:27.956 [2024-12-08 06:05:50.755431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.755442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:27.956 [2024-12-08 06:05:50.755457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:18:27.956 [2024-12-08 06:05:50.755493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.756675] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:27.956 [2024-12-08 06:05:50.758959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.758999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:27.956 [2024-12-08 06:05:50.759031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.287 ms 00:18:27.956 [2024-12-08 06:05:50.759041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.759110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.759132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:27.956 [2024-12-08 06:05:50.759143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:27.956 [2024-12-08 06:05:50.759155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.763688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:27.956 [2024-12-08 06:05:50.763735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:27.956 [2024-12-08 06:05:50.763751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:18:27.956 [2024-12-08 06:05:50.763761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.763885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.763904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:27.956 [2024-12-08 06:05:50.763915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:27.956 [2024-12-08 06:05:50.763933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.764012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.764035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:27.956 [2024-12-08 06:05:50.764048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:27.956 [2024-12-08 06:05:50.764057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.764091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:27.956 [2024-12-08 06:05:50.765419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.765676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:27.956 [2024-12-08 06:05:50.765732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:18:27.956 [2024-12-08 06:05:50.765756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.765807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.765835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:27.956 [2024-12-08 06:05:50.765855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:27.956 [2024-12-08 06:05:50.765865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.765895] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:27.956 [2024-12-08 06:05:50.765926] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:27.956 [2024-12-08 06:05:50.765997] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:27.956 [2024-12-08 06:05:50.766018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:27.956 [2024-12-08 06:05:50.766119] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:27.956 [2024-12-08 06:05:50.766133] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:27.956 [2024-12-08 06:05:50.766146] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:27.956 [2024-12-08 06:05:50.766168] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:27.956 [2024-12-08 06:05:50.766184] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:27.956 [2024-12-08 06:05:50.766195] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:27.956 [2024-12-08 06:05:50.766220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:27.956 [2024-12-08 06:05:50.766257] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:27.956 [2024-12-08 06:05:50.766267] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:27.956 [2024-12-08 06:05:50.766286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.766297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:27.956 [2024-12-08 06:05:50.766309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:18:27.956 [2024-12-08 06:05:50.766329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.766418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.956 [2024-12-08 06:05:50.766437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:27.956 [2024-12-08 06:05:50.766451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:27.956 [2024-12-08 06:05:50.766461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.956 [2024-12-08 06:05:50.766590] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:27.956 [2024-12-08 06:05:50.766607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:27.956 [2024-12-08 06:05:50.766618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.956 [2024-12-08 06:05:50.766628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.956 [2024-12-08 06:05:50.766638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:27.956 [2024-12-08 06:05:50.766647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:27.957 [2024-12-08 06:05:50.766666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:27.957 [2024-12-08 06:05:50.766677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.957 [2024-12-08 06:05:50.766695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:27.957 [2024-12-08 06:05:50.766704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:27.957 [2024-12-08 06:05:50.766713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.957 [2024-12-08 06:05:50.766727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:27.957 [2024-12-08 06:05:50.766737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:27.957 [2024-12-08 06:05:50.766746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:27.957 [2024-12-08 06:05:50.766765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:27.957 [2024-12-08 06:05:50.766774] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:27.957 [2024-12-08 06:05:50.766792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.957 [2024-12-08 06:05:50.766811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:27.957 [2024-12-08 06:05:50.766820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.957 [2024-12-08 06:05:50.766838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:27.957 [2024-12-08 06:05:50.766847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.957 [2024-12-08 06:05:50.766864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:27.957 [2024-12-08 06:05:50.766879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.957 [2024-12-08 06:05:50.766897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:27.957 [2024-12-08 06:05:50.766906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.957 [2024-12-08 06:05:50.766924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:27.957 [2024-12-08 06:05:50.766933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:27.957 [2024-12-08 06:05:50.766942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.957 [2024-12-08 06:05:50.766952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:27.957 [2024-12-08 06:05:50.766961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:27.957 [2024-12-08 06:05:50.766970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.957 [2024-12-08 06:05:50.766979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:27.957 [2024-12-08 06:05:50.766988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:27.957 [2024-12-08 06:05:50.766997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.957 [2024-12-08 06:05:50.767006] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:27.957 [2024-12-08 06:05:50.767016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:27.957 [2024-12-08 06:05:50.767028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.957 [2024-12-08 06:05:50.767049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.957 [2024-12-08 06:05:50.767060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:27.957 [2024-12-08 06:05:50.767070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:27.957 [2024-12-08 06:05:50.767079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:27.957 
[2024-12-08 06:05:50.767089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:27.957 [2024-12-08 06:05:50.767098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:27.957 [2024-12-08 06:05:50.767107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:27.957 [2024-12-08 06:05:50.767117] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:27.957 [2024-12-08 06:05:50.767129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.957 [2024-12-08 06:05:50.767141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:27.957 [2024-12-08 06:05:50.767151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:27.957 [2024-12-08 06:05:50.767161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:27.957 [2024-12-08 06:05:50.767171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:27.957 [2024-12-08 06:05:50.767181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:27.957 [2024-12-08 06:05:50.767191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:27.957 [2024-12-08 06:05:50.767220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:27.957 [2024-12-08 06:05:50.767232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:27.957 [2024-12-08 06:05:50.767242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:27.957 [2024-12-08 06:05:50.767262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:27.957 [2024-12-08 06:05:50.767273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:27.957 [2024-12-08 06:05:50.767283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:27.957 [2024-12-08 06:05:50.767293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:27.957 [2024-12-08 06:05:50.767304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:27.957 [2024-12-08 06:05:50.767313] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:27.957 [2024-12-08 06:05:50.767324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.957 [2024-12-08 06:05:50.767336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:27.957 [2024-12-08 06:05:50.767347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:27.957 [2024-12-08 06:05:50.767357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:27.957 [2024-12-08 06:05:50.767367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:27.957 [2024-12-08 06:05:50.767378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.957 [2024-12-08 06:05:50.767388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:27.957 [2024-12-08 06:05:50.767410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:18:27.957 [2024-12-08 06:05:50.767426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.957 [2024-12-08 06:05:50.791618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.957 [2024-12-08 06:05:50.791899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.957 [2024-12-08 06:05:50.792162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.087 ms 00:18:27.957 [2024-12-08 06:05:50.792432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.957 [2024-12-08 06:05:50.792816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.957 [2024-12-08 06:05:50.793020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:27.957 [2024-12-08 06:05:50.793286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:27.957 [2024-12-08 06:05:50.793485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.957 [2024-12-08 06:05:50.801631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.957 [2024-12-08 06:05:50.801842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:27.957 [2024-12-08 06:05:50.801992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.787 ms 00:18:27.958 [2024-12-08 06:05:50.802149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.802442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.802614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:27.958 [2024-12-08 06:05:50.802768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:27.958 [2024-12-08 06:05:50.802910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.803584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.803811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:27.958 [2024-12-08 06:05:50.803955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:18:27.958 [2024-12-08 06:05:50.804108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.804446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.804589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:27.958 [2024-12-08 06:05:50.804762] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:18:27.958 [2024-12-08 06:05:50.804933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.809784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.809984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:27.958 [2024-12-08 06:05:50.810152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.643 ms 00:18:27.958 [2024-12-08 06:05:50.810205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.812587] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:27.958 [2024-12-08 06:05:50.812666] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:27.958 [2024-12-08 06:05:50.812699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.812713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:27.958 [2024-12-08 06:05:50.812725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.318 ms 00:18:27.958 [2024-12-08 06:05:50.812734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.826565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.826605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:27.958 [2024-12-08 06:05:50.826642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.788 ms 00:18:27.958 [2024-12-08 06:05:50.826663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.828577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.828618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:27.958 [2024-12-08 06:05:50.828664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:18:27.958 [2024-12-08 06:05:50.828674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.830344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.830383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:27.958 [2024-12-08 06:05:50.830413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:18:27.958 [2024-12-08 06:05:50.830423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.830789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.830809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:27.958 [2024-12-08 06:05:50.830821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:18:27.958 [2024-12-08 06:05:50.830830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.846476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.846556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:27.958 [2024-12-08 06:05:50.846596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.624 ms 00:18:27.958 [2024-12-08 06:05:50.846609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.854261] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:27.958 [2024-12-08 06:05:50.856600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.856636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:27.958 [2024-12-08 06:05:50.856667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.930 ms 00:18:27.958 [2024-12-08 06:05:50.856677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.856752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.856773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:27.958 [2024-12-08 06:05:50.856786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:27.958 [2024-12-08 06:05:50.856796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.856927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.856948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:27.958 [2024-12-08 06:05:50.856959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:27.958 [2024-12-08 06:05:50.856968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.857009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.857033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:27.958 [2024-12-08 06:05:50.857044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:27.958 [2024-12-08 06:05:50.857053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.857100] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:27.958 [2024-12-08 06:05:50.857116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.857125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:27.958 [2024-12-08 06:05:50.857139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:27.958 [2024-12-08 06:05:50.857148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.860569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.860615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:27.958 [2024-12-08 06:05:50.860646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:18:27.958 [2024-12-08 06:05:50.860657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 [2024-12-08 06:05:50.860735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.958 [2024-12-08 06:05:50.860753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:27.958 [2024-12-08 06:05:50.860774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:27.958 [2024-12-08 06:05:50.860784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.958 
[2024-12-08 06:05:50.862083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.635 ms, result 0 00:18:28.894  [2024-12-08T06:05:52.875Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-08T06:05:54.253Z] Copying: 45/1024 [MB] (22 MBps) [2024-12-08T06:05:55.194Z] Copying: 69/1024 [MB] (23 MBps) [2024-12-08T06:05:56.127Z] Copying: 93/1024 [MB] (23 MBps) [2024-12-08T06:05:57.064Z] Copying: 115/1024 [MB] (22 MBps) [2024-12-08T06:05:58.001Z] Copying: 138/1024 [MB] (22 MBps) [2024-12-08T06:05:58.938Z] Copying: 161/1024 [MB] (22 MBps) [2024-12-08T06:05:59.875Z] Copying: 183/1024 [MB] (22 MBps) [2024-12-08T06:06:01.276Z] Copying: 206/1024 [MB] (23 MBps) [2024-12-08T06:06:02.209Z] Copying: 230/1024 [MB] (23 MBps) [2024-12-08T06:06:03.141Z] Copying: 252/1024 [MB] (22 MBps) [2024-12-08T06:06:04.077Z] Copying: 276/1024 [MB] (23 MBps) [2024-12-08T06:06:05.029Z] Copying: 300/1024 [MB] (23 MBps) [2024-12-08T06:06:05.970Z] Copying: 323/1024 [MB] (23 MBps) [2024-12-08T06:06:06.900Z] Copying: 346/1024 [MB] (23 MBps) [2024-12-08T06:06:08.272Z] Copying: 369/1024 [MB] (22 MBps) [2024-12-08T06:06:09.208Z] Copying: 392/1024 [MB] (23 MBps) [2024-12-08T06:06:09.881Z] Copying: 416/1024 [MB] (23 MBps) [2024-12-08T06:06:11.260Z] Copying: 439/1024 [MB] (23 MBps) [2024-12-08T06:06:12.198Z] Copying: 463/1024 [MB] (23 MBps) [2024-12-08T06:06:13.130Z] Copying: 486/1024 [MB] (23 MBps) [2024-12-08T06:06:14.063Z] Copying: 510/1024 [MB] (23 MBps) [2024-12-08T06:06:14.997Z] Copying: 533/1024 [MB] (23 MBps) [2024-12-08T06:06:15.932Z] Copying: 556/1024 [MB] (23 MBps) [2024-12-08T06:06:17.308Z] Copying: 579/1024 [MB] (23 MBps) [2024-12-08T06:06:17.876Z] Copying: 603/1024 [MB] (23 MBps) [2024-12-08T06:06:19.255Z] Copying: 626/1024 [MB] (23 MBps) [2024-12-08T06:06:20.193Z] Copying: 648/1024 [MB] (22 MBps) [2024-12-08T06:06:21.127Z] Copying: 672/1024 [MB] (23 MBps) [2024-12-08T06:06:22.061Z] Copying: 695/1024 [MB] (23 MBps) [2024-12-08T06:06:22.996Z] Copying: 717/1024 [MB] (22 MBps) [2024-12-08T06:06:23.931Z] Copying: 740/1024 [MB] (22 MBps) [2024-12-08T06:06:24.883Z] Copying: 763/1024 [MB] (23 MBps) [2024-12-08T06:06:26.258Z] Copying: 786/1024 [MB] (23 MBps) [2024-12-08T06:06:27.193Z] Copying: 809/1024 [MB] (22 MBps) [2024-12-08T06:06:28.130Z] Copying: 833/1024 [MB] (23 MBps) [2024-12-08T06:06:29.095Z] Copying: 856/1024 [MB] (23 MBps) [2024-12-08T06:06:30.032Z] Copying: 879/1024 [MB] (23 MBps) [2024-12-08T06:06:30.967Z] Copying: 903/1024 [MB] (23 MBps) [2024-12-08T06:06:31.903Z] Copying: 927/1024 [MB] (23 MBps) [2024-12-08T06:06:33.285Z] Copying: 950/1024 [MB] (23 MBps) [2024-12-08T06:06:34.218Z] Copying: 973/1024 [MB] (23 MBps) [2024-12-08T06:06:35.155Z] Copying: 997/1024 [MB] (23 MBps) [2024-12-08T06:06:35.155Z] Copying: 1020/1024 [MB] (23 MBps) [2024-12-08T06:06:35.155Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 06:06:35.011004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.110 [2024-12-08 06:06:35.011053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.110 [2024-12-08 06:06:35.011087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.110 [2024-12-08 06:06:35.011098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.110 [2024-12-08 06:06:35.011126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.110 [2024-12-08 06:06:35.011699] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:12.110 [2024-12-08 06:06:35.011738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.111 [2024-12-08 06:06:35.011753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:12.111 [2024-12-08 06:06:35.011764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.013340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.013374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.111 [2024-12-08 06:06:35.013390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:19:12.111 [2024-12-08 06:06:35.013401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.028677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.028716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.111 [2024-12-08 06:06:35.028755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.253 ms 00:19:12.111 [2024-12-08 06:06:35.028765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.034697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.034728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.111 [2024-12-08 06:06:35.034757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:19:12.111 [2024-12-08 06:06:35.034778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.036077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.036120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.111 [2024-12-08 06:06:35.036136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:19:12.111 [2024-12-08 06:06:35.036148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.039386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.039433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.111 [2024-12-08 06:06:35.039474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:19:12.111 [2024-12-08 06:06:35.039488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.039611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.039630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.111 [2024-12-08 06:06:35.039643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:12.111 [2024-12-08 06:06:35.039653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.041781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.041821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.111 [2024-12-08 06:06:35.041851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:19:12.111 [2024-12-08 06:06:35.041861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 
[2024-12-08 06:06:35.043225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.043305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.111 [2024-12-08 06:06:35.043342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:19:12.111 [2024-12-08 06:06:35.043351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.044768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.044963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.111 [2024-12-08 06:06:35.045003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:19:12.111 [2024-12-08 06:06:35.045014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.046440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.111 [2024-12-08 06:06:35.046491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.111 [2024-12-08 06:06:35.046520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:19:12.111 [2024-12-08 06:06:35.046530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.111 [2024-12-08 06:06:35.046565] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.111 [2024-12-08 06:06:35.046586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.046999] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:12.111 [2024-12-08 06:06:35.047142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047300] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 
06:06:35.047615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:12.112 [2024-12-08 06:06:35.047776] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:12.112 [2024-12-08 06:06:35.047807] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25ee0777-a5f2-4ffa-adf2-aa0d3705dbe4 00:19:12.112 [2024-12-08 06:06:35.047818] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.112 [2024-12-08 06:06:35.047828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.112 [2024-12-08 06:06:35.047838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.112 [2024-12-08 06:06:35.047848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.112 [2024-12-08 06:06:35.047857] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.112 [2024-12-08 06:06:35.047868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.112 [2024-12-08 06:06:35.047892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.112 [2024-12-08 06:06:35.047901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.112 [2024-12-08 06:06:35.047910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.112 [2024-12-08 06:06:35.047920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.112 [2024-12-08 06:06:35.047940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.112 [2024-12-08 06:06:35.047951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:19:12.112 [2024-12-08 06:06:35.047965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.049411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.112 [2024-12-08 06:06:35.049440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:12.112 [2024-12-08 06:06:35.049453] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:19:12.112 [2024-12-08 06:06:35.049463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.049536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.112 [2024-12-08 06:06:35.049552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:12.112 [2024-12-08 06:06:35.049584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:12.112 [2024-12-08 06:06:35.049594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.054024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.054063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.112 [2024-12-08 06:06:35.054093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.112 [2024-12-08 06:06:35.054102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.054153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.054167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.112 [2024-12-08 06:06:35.054183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.112 [2024-12-08 06:06:35.054192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.054277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.054295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.112 [2024-12-08 06:06:35.054306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.112 [2024-12-08 06:06:35.054316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.054345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.054358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.112 [2024-12-08 06:06:35.054368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.112 [2024-12-08 06:06:35.054377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.062478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.062531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.112 [2024-12-08 06:06:35.062562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.112 [2024-12-08 06:06:35.062572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.068976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.069022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.112 [2024-12-08 06:06:35.069053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.112 [2024-12-08 06:06:35.069073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.069140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.069157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:19:12.112 [2024-12-08 06:06:35.069167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.112 [2024-12-08 06:06:35.069176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.112 [2024-12-08 06:06:35.069533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.112 [2024-12-08 06:06:35.069607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.113 [2024-12-08 06:06:35.069765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.113 [2024-12-08 06:06:35.069789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.113 [2024-12-08 06:06:35.069909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.113 [2024-12-08 06:06:35.069950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.113 [2024-12-08 06:06:35.069964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.113 [2024-12-08 06:06:35.069974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.113 [2024-12-08 06:06:35.070022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.113 [2024-12-08 06:06:35.070040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.113 [2024-12-08 06:06:35.070065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.113 [2024-12-08 06:06:35.070075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.113 [2024-12-08 06:06:35.070121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.113 [2024-12-08 06:06:35.070137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.113 [2024-12-08 06:06:35.070147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.113 [2024-12-08 06:06:35.070156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.113 [2024-12-08 06:06:35.070234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.113 [2024-12-08 06:06:35.070283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.113 [2024-12-08 06:06:35.070293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.113 [2024-12-08 06:06:35.070303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.113 [2024-12-08 06:06:35.070460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.434 ms, result 0 00:19:12.371 00:19:12.371 00:19:12.371 06:06:35 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:12.629 [2024-12-08 06:06:35.463489] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:12.629 [2024-12-08 06:06:35.463669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87679 ] 00:19:12.629 [2024-12-08 06:06:35.610771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.629 [2024-12-08 06:06:35.649128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.889 [2024-12-08 06:06:35.733942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.889 [2024-12-08 06:06:35.734041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.889 [2024-12-08 06:06:35.890847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.890903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:12.889 [2024-12-08 06:06:35.890949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:12.889 [2024-12-08 06:06:35.890959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.891018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.891041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.889 [2024-12-08 06:06:35.891052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:12.889 [2024-12-08 06:06:35.891073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.891107] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:12.889 [2024-12-08 06:06:35.891442] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:12.889 [2024-12-08 06:06:35.891496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.891519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.889 [2024-12-08 06:06:35.891536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:19:12.889 [2024-12-08 06:06:35.891548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.892811] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:12.889 [2024-12-08 06:06:35.895001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.895039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:12.889 [2024-12-08 06:06:35.895070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.192 ms 00:19:12.889 [2024-12-08 06:06:35.895079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.895144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.895164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:12.889 [2024-12-08 06:06:35.895175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:12.889 [2024-12-08 06:06:35.895503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.900046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:12.889 [2024-12-08 06:06:35.900082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.889 [2024-12-08 06:06:35.900112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.445 ms 00:19:12.889 [2024-12-08 06:06:35.900121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.900274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.900295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.889 [2024-12-08 06:06:35.900315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:12.889 [2024-12-08 06:06:35.900324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.900388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.900409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:12.889 [2024-12-08 06:06:35.900422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:12.889 [2024-12-08 06:06:35.900431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.900460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.889 [2024-12-08 06:06:35.901804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.902007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.889 [2024-12-08 06:06:35.902050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:19:12.889 [2024-12-08 06:06:35.902061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.902100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.902114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:12.889 [2024-12-08 06:06:35.902138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:12.889 [2024-12-08 06:06:35.902148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.902177] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:12.889 [2024-12-08 06:06:35.902256] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:12.889 [2024-12-08 06:06:35.902310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:12.889 [2024-12-08 06:06:35.902355] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:12.889 [2024-12-08 06:06:35.902482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:12.889 [2024-12-08 06:06:35.902497] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:12.889 [2024-12-08 06:06:35.902510] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:12.889 [2024-12-08 06:06:35.902532] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:12.889 [2024-12-08 06:06:35.902555] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:12.889 [2024-12-08 06:06:35.902588] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:12.889 [2024-12-08 06:06:35.902604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:12.889 [2024-12-08 06:06:35.902628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:12.889 [2024-12-08 06:06:35.902637] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:12.889 [2024-12-08 06:06:35.902647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.902656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:12.889 [2024-12-08 06:06:35.902666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:19:12.889 [2024-12-08 06:06:35.902676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.902755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.889 [2024-12-08 06:06:35.902781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:12.889 [2024-12-08 06:06:35.902794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:12.889 [2024-12-08 06:06:35.902804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.889 [2024-12-08 06:06:35.902900] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:12.889 [2024-12-08 06:06:35.902916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:12.889 [2024-12-08 06:06:35.902926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.889 [2024-12-08 06:06:35.902936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.889 [2024-12-08 06:06:35.902946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:12.889 [2024-12-08 06:06:35.902954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:12.889 [2024-12-08 06:06:35.902963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:12.889 [2024-12-08 06:06:35.902971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:12.889 [2024-12-08 06:06:35.902981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:12.889 [2024-12-08 06:06:35.902990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.889 [2024-12-08 06:06:35.902999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:12.889 [2024-12-08 06:06:35.903007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:12.889 [2024-12-08 06:06:35.903015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.889 [2024-12-08 06:06:35.903028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:12.889 [2024-12-08 06:06:35.903037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:12.890 [2024-12-08 06:06:35.903046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:12.890 [2024-12-08 06:06:35.903064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:12.890 [2024-12-08 06:06:35.903072] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:12.890 [2024-12-08 06:06:35.903090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.890 [2024-12-08 06:06:35.903107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:12.890 [2024-12-08 06:06:35.903116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.890 [2024-12-08 06:06:35.903133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:12.890 [2024-12-08 06:06:35.903141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.890 [2024-12-08 06:06:35.903158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:12.890 [2024-12-08 06:06:35.903171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.890 [2024-12-08 06:06:35.903189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:12.890 [2024-12-08 06:06:35.903198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.890 [2024-12-08 06:06:35.903230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:12.890 [2024-12-08 06:06:35.903239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:12.890 [2024-12-08 06:06:35.903247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.890 [2024-12-08 06:06:35.903290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:12.890 [2024-12-08 06:06:35.903300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:12.890 [2024-12-08 06:06:35.903309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:12.890 [2024-12-08 06:06:35.903327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:12.890 [2024-12-08 06:06:35.903336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903345] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:12.890 [2024-12-08 06:06:35.903355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:12.890 [2024-12-08 06:06:35.903368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.890 [2024-12-08 06:06:35.903380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.890 [2024-12-08 06:06:35.903390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:12.890 [2024-12-08 06:06:35.903400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:12.890 [2024-12-08 06:06:35.903409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:12.890 
[2024-12-08 06:06:35.903418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:12.890 [2024-12-08 06:06:35.903427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:12.890 [2024-12-08 06:06:35.903436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:12.890 [2024-12-08 06:06:35.903447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:12.890 [2024-12-08 06:06:35.903459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.890 [2024-12-08 06:06:35.903498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:12.890 [2024-12-08 06:06:35.903511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:12.890 [2024-12-08 06:06:35.903522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:12.890 [2024-12-08 06:06:35.903533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:12.890 [2024-12-08 06:06:35.903544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:12.890 [2024-12-08 06:06:35.903555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:12.890 [2024-12-08 06:06:35.903569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:12.890 [2024-12-08 06:06:35.903581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:12.890 [2024-12-08 06:06:35.903592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:12.890 [2024-12-08 06:06:35.903615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:12.890 [2024-12-08 06:06:35.903626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:12.890 [2024-12-08 06:06:35.903637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:12.890 [2024-12-08 06:06:35.903648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:12.890 [2024-12-08 06:06:35.903659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:12.890 [2024-12-08 06:06:35.903670] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:12.890 [2024-12-08 06:06:35.903683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.890 [2024-12-08 06:06:35.903696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:12.890 [2024-12-08 06:06:35.903707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:12.890 [2024-12-08 06:06:35.903718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:12.890 [2024-12-08 06:06:35.903729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:12.890 [2024-12-08 06:06:35.903741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.890 [2024-12-08 06:06:35.903752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:12.890 [2024-12-08 06:06:35.903767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:19:12.890 [2024-12-08 06:06:35.903778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.890 [2024-12-08 06:06:35.926378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.890 [2024-12-08 06:06:35.926634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.890 [2024-12-08 06:06:35.926664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.489 ms 00:19:12.890 [2024-12-08 06:06:35.926684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.890 [2024-12-08 06:06:35.926801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.890 [2024-12-08 06:06:35.926818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.890 [2024-12-08 06:06:35.926830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:12.890 [2024-12-08 06:06:35.926840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.148 [2024-12-08 06:06:35.936797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.148 [2024-12-08 06:06:35.936856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.148 [2024-12-08 06:06:35.936878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.862 ms 00:19:13.148 [2024-12-08 06:06:35.936892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.148 [2024-12-08 06:06:35.936967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.148 [2024-12-08 06:06:35.936986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.148 [2024-12-08 06:06:35.937001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:13.148 [2024-12-08 06:06:35.937019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.148 [2024-12-08 06:06:35.937554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.148 [2024-12-08 06:06:35.937588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.148 [2024-12-08 06:06:35.937613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:19:13.149 [2024-12-08 06:06:35.937626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.937832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.937862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.149 [2024-12-08 06:06:35.937877] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:19:13.149 [2024-12-08 06:06:35.937902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.942987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.943028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.149 [2024-12-08 06:06:35.943048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:19:13.149 [2024-12-08 06:06:35.943057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.945484] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:13.149 [2024-12-08 06:06:35.945528] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:13.149 [2024-12-08 06:06:35.945584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.945631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:13.149 [2024-12-08 06:06:35.945657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:19:13.149 [2024-12-08 06:06:35.945667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.961007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.961057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:13.149 [2024-12-08 06:06:35.961101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.276 ms 00:19:13.149 [2024-12-08 06:06:35.961123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.963024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.963062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:13.149 [2024-12-08 06:06:35.963093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:19:13.149 [2024-12-08 06:06:35.963102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.964840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.964878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:13.149 [2024-12-08 06:06:35.964907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:19:13.149 [2024-12-08 06:06:35.964917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.965294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.965323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.149 [2024-12-08 06:06:35.965337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:13.149 [2024-12-08 06:06:35.965347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.981427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.981510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:13.149 [2024-12-08 06:06:35.981552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.057 ms 00:19:13.149 [2024-12-08 06:06:35.981575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.989315] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:13.149 [2024-12-08 06:06:35.991786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.991868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.149 [2024-12-08 06:06:35.991888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.133 ms 00:19:13.149 [2024-12-08 06:06:35.991899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.991967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.991992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:13.149 [2024-12-08 06:06:35.992008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:13.149 [2024-12-08 06:06:35.992019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.992148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.992168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.149 [2024-12-08 06:06:35.992199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:13.149 [2024-12-08 06:06:35.992235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.992286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.992318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.149 [2024-12-08 06:06:35.992330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:13.149 [2024-12-08 06:06:35.992340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.992388] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:13.149 [2024-12-08 06:06:35.992404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.992414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:13.149 [2024-12-08 06:06:35.992424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:13.149 [2024-12-08 06:06:35.992443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.995960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.996000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:13.149 [2024-12-08 06:06:35.996042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.490 ms 00:19:13.149 [2024-12-08 06:06:35.996053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 [2024-12-08 06:06:35.996135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.149 [2024-12-08 06:06:35.996160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:13.149 [2024-12-08 06:06:35.996171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:13.149 [2024-12-08 06:06:35.996215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.149 
[2024-12-08 06:06:35.997596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.187 ms, result 0 00:19:14.526  [2024-12-08T06:06:38.507Z] Copying: 23/1024 [MB] (23 MBps) [... 42 intermediate Copying progress updates, 47..1010/1024 MB at a steady ~23 MBps ...] [2024-12-08T06:07:20.142Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 06:07:19.967745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.967876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.097 [2024-12-08 06:07:19.967920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:57.097 [2024-12-08 06:07:19.967933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.967972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.097 [2024-12-08 06:07:19.969532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 
06:07:19.969581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.097 [2024-12-08 06:07:19.969595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms 00:19:57.097 [2024-12-08 06:07:19.969607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.969860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.969878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.097 [2024-12-08 06:07:19.969889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:19:57.097 [2024-12-08 06:07:19.969899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.973943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.973981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.097 [2024-12-08 06:07:19.974011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:19:57.097 [2024-12-08 06:07:19.974021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.980379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.980413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.097 [2024-12-08 06:07:19.980443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.333 ms 00:19:57.097 [2024-12-08 06:07:19.980453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.982010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.982068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.097 [2024-12-08 06:07:19.982099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:19:57.097 [2024-12-08 06:07:19.982109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.985056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.985099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.097 [2024-12-08 06:07:19.985115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:19:57.097 [2024-12-08 06:07:19.985125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.985271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.985292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.097 [2024-12-08 06:07:19.985305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:57.097 [2024-12-08 06:07:19.985315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.987639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.987682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.097 [2024-12-08 06:07:19.987698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:19:57.097 [2024-12-08 06:07:19.987709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.097 [2024-12-08 06:07:19.989311] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:57.097 [2024-12-08 06:07:19.989363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.098 [2024-12-08 06:07:19.989388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:19:57.098 [2024-12-08 06:07:19.989398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.098 [2024-12-08 06:07:19.990675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.098 [2024-12-08 06:07:19.990744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.098 [2024-12-08 06:07:19.990766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:19:57.098 [2024-12-08 06:07:19.990775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.098 [2024-12-08 06:07:19.992196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.098 [2024-12-08 06:07:19.992243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.098 [2024-12-08 06:07:19.992274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:19:57.098 [2024-12-08 06:07:19.992286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.098 [2024-12-08 06:07:19.992323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.098 [2024-12-08 06:07:19.992350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 
06:07:19.992502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:19:57.098 [2024-12-08 06:07:19.992753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.992994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.098 [2024-12-08 06:07:19.993151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.993161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.993171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.993487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.993694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.993948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.099 [2024-12-08 06:07:19.994841] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.099 [2024-12-08 06:07:19.994867] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25ee0777-a5f2-4ffa-adf2-aa0d3705dbe4 00:19:57.099 [2024-12-08 06:07:19.994877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.099 [2024-12-08 06:07:19.994900] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.099 [2024-12-08 06:07:19.994909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.099 [2024-12-08 06:07:19.994919] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.099 [2024-12-08 06:07:19.994928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.099 [2024-12-08 06:07:19.994938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.099 [2024-12-08 06:07:19.994947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.099 [2024-12-08 06:07:19.994955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.099 [2024-12-08 06:07:19.994964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.099 [2024-12-08 06:07:19.994974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.099 [2024-12-08 06:07:19.994990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.099 [2024-12-08 06:07:19.995004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:19:57.099 [2024-12-08 06:07:19.995014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:19.996558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.099 [2024-12-08 06:07:19.996591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.099 [2024-12-08 06:07:19.996605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:19:57.099 [2024-12-08 
06:07:19.996616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:19.996699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.099 [2024-12-08 06:07:19.996721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.099 [2024-12-08 06:07:19.996733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:57.099 [2024-12-08 06:07:19.996754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.001342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.001541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.099 [2024-12-08 06:07:20.001736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.001891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.002159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.002348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.099 [2024-12-08 06:07:20.002540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.002719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.002975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.003137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.099 [2024-12-08 06:07:20.003331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.003520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.003707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.003866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.099 [2024-12-08 06:07:20.004039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.004231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.012716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.012779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.099 [2024-12-08 06:07:20.012812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.012823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.019613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.019682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.099 [2024-12-08 06:07:20.019701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.019713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.019786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.019803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.099 [2024-12-08 06:07:20.019815] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.019826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.019855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.019868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.099 [2024-12-08 06:07:20.019879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.019896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.019994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.020014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.099 [2024-12-08 06:07:20.020026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.020037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.020086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.020104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.099 [2024-12-08 06:07:20.020116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.020127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.020299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.020332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.099 [2024-12-08 06:07:20.020352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.020389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.020470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.099 [2024-12-08 06:07:20.020500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.099 [2024-12-08 06:07:20.020523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.099 [2024-12-08 06:07:20.020546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.099 [2024-12-08 06:07:20.020751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.961 ms, result 0 00:19:57.356 00:19:57.357 00:19:57.357 06:07:20 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:59.258 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:59.258 06:07:22 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:59.258 [2024-12-08 06:07:22.298446] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:59.258 [2024-12-08 06:07:22.298635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88154 ] 00:19:59.516 [2024-12-08 06:07:22.449456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.516 [2024-12-08 06:07:22.492605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.776 [2024-12-08 06:07:22.586681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.776 [2024-12-08 06:07:22.586790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.776 [2024-12-08 06:07:22.742176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.776 [2024-12-08 06:07:22.742435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.776 [2024-12-08 06:07:22.742494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:59.776 [2024-12-08 06:07:22.742519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.776 [2024-12-08 06:07:22.742613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.776 [2024-12-08 06:07:22.742634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.776 [2024-12-08 06:07:22.742646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:59.776 [2024-12-08 06:07:22.742668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.776 [2024-12-08 06:07:22.742717] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.776 [2024-12-08 06:07:22.742999] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.776 [2024-12-08 06:07:22.743029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.776 [2024-12-08 06:07:22.743040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.776 [2024-12-08 06:07:22.743054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:19:59.776 [2024-12-08 06:07:22.743066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.776 [2024-12-08 06:07:22.744326] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:59.776 [2024-12-08 06:07:22.746604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.776 [2024-12-08 06:07:22.746798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:59.776 [2024-12-08 06:07:22.746979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:19:59.776 [2024-12-08 06:07:22.747166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.776 [2024-12-08 06:07:22.747432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.776 [2024-12-08 06:07:22.747631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:59.776 [2024-12-08 06:07:22.747835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:59.777 [2024-12-08 06:07:22.748025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.752613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:59.777 [2024-12-08 06:07:22.752652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.777 [2024-12-08 06:07:22.752684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:19:59.777 [2024-12-08 06:07:22.752694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.752803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.777 [2024-12-08 06:07:22.752823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.777 [2024-12-08 06:07:22.752843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:59.777 [2024-12-08 06:07:22.752854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.752922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.777 [2024-12-08 06:07:22.752956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:59.777 [2024-12-08 06:07:22.752969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:59.777 [2024-12-08 06:07:22.752980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.753013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.777 [2024-12-08 06:07:22.754486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.777 [2024-12-08 06:07:22.754519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.777 [2024-12-08 06:07:22.754549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:19:59.777 [2024-12-08 06:07:22.754560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.754594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.777 [2024-12-08 06:07:22.754608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:59.777 [2024-12-08 06:07:22.754645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:59.777 [2024-12-08 06:07:22.754655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.754681] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:59.777 [2024-12-08 06:07:22.754711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:59.777 [2024-12-08 06:07:22.754764] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:59.777 [2024-12-08 06:07:22.754785] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:59.777 [2024-12-08 06:07:22.754880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:59.777 [2024-12-08 06:07:22.754894] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:59.777 [2024-12-08 06:07:22.754906] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:59.777 [2024-12-08 06:07:22.754918] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:59.777 [2024-12-08 06:07:22.754934] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:59.777 [2024-12-08 06:07:22.754945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:59.777 [2024-12-08 06:07:22.754954] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:59.777 [2024-12-08 06:07:22.754971] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:59.777 [2024-12-08 06:07:22.754980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:59.777 [2024-12-08 06:07:22.754998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.777 [2024-12-08 06:07:22.755008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:59.777 [2024-12-08 06:07:22.755018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:19:59.777 [2024-12-08 06:07:22.755028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.755106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.777 [2024-12-08 06:07:22.755124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:59.777 [2024-12-08 06:07:22.755137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:59.777 [2024-12-08 06:07:22.755155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.777 [2024-12-08 06:07:22.755286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:59.777 [2024-12-08 06:07:22.755304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:59.777 [2024-12-08 06:07:22.755316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:59.777 [2024-12-08 06:07:22.755345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:59.777 [2024-12-08 06:07:22.755375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.777 [2024-12-08 06:07:22.755393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.777 [2024-12-08 06:07:22.755402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:59.777 [2024-12-08 06:07:22.755411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.777 [2024-12-08 06:07:22.755424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.777 [2024-12-08 06:07:22.755434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:59.777 [2024-12-08 06:07:22.755443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.777 [2024-12-08 06:07:22.755488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755514] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.777 [2024-12-08 06:07:22.755534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.777 [2024-12-08 06:07:22.755562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.777 [2024-12-08 06:07:22.755590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.777 [2024-12-08 06:07:22.755643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.777 [2024-12-08 06:07:22.755671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.777 [2024-12-08 06:07:22.755690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.777 [2024-12-08 06:07:22.755699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:59.777 [2024-12-08 06:07:22.755707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.777 [2024-12-08 06:07:22.755717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:59.777 [2024-12-08 06:07:22.755726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:59.777 [2024-12-08 06:07:22.755734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:59.777 [2024-12-08 06:07:22.755753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:59.777 [2024-12-08 06:07:22.755763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755772] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.777 [2024-12-08 06:07:22.755782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.777 [2024-12-08 06:07:22.755809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.777 [2024-12-08 06:07:22.755833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:59.777 [2024-12-08 06:07:22.755843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.777 [2024-12-08 06:07:22.755853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.777 
[2024-12-08 06:07:22.755862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.777 [2024-12-08 06:07:22.755871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.777 [2024-12-08 06:07:22.755894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.777 [2024-12-08 06:07:22.755904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.777 [2024-12-08 06:07:22.755916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.777 [2024-12-08 06:07:22.755926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:59.777 [2024-12-08 06:07:22.755936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:59.777 [2024-12-08 06:07:22.755945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:59.777 [2024-12-08 06:07:22.755955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:59.778 [2024-12-08 06:07:22.755964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:59.778 [2024-12-08 06:07:22.755974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:59.778 [2024-12-08 06:07:22.755985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:59.778 [2024-12-08 06:07:22.755996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:59.778 [2024-12-08 06:07:22.756005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:59.778 [2024-12-08 06:07:22.756025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:59.778 [2024-12-08 06:07:22.756035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:59.778 [2024-12-08 06:07:22.756045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:59.778 [2024-12-08 06:07:22.756054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:59.778 [2024-12-08 06:07:22.756064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:59.778 [2024-12-08 06:07:22.756073] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.778 [2024-12-08 06:07:22.756084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.778 [2024-12-08 06:07:22.756095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.778 [2024-12-08 06:07:22.756104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.778 [2024-12-08 06:07:22.756114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:59.778 [2024-12-08 06:07:22.756123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.778 [2024-12-08 06:07:22.756134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.756144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.778 [2024-12-08 06:07:22.756164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:19:59.778 [2024-12-08 06:07:22.756174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.773094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.773384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.778 [2024-12-08 06:07:22.773598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.858 ms 00:19:59.778 [2024-12-08 06:07:22.773816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.774165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.774373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.778 [2024-12-08 06:07:22.774563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:59.778 [2024-12-08 06:07:22.774786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.784564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.784781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.778 [2024-12-08 06:07:22.785006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.453 ms 00:19:59.778 [2024-12-08 06:07:22.785267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.785541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.785755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.778 [2024-12-08 06:07:22.785954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:59.778 [2024-12-08 06:07:22.786174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.786827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.787019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.778 [2024-12-08 06:07:22.787169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:59.778 [2024-12-08 06:07:22.787383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.787632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.787667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.778 [2024-12-08 06:07:22.787683] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:19:59.778 [2024-12-08 06:07:22.787695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.792401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.792438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.778 [2024-12-08 06:07:22.792482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.673 ms 00:19:59.778 [2024-12-08 06:07:22.792500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.794790] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:59.778 [2024-12-08 06:07:22.794830] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:59.778 [2024-12-08 06:07:22.794863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.794878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:59.778 [2024-12-08 06:07:22.794890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:19:59.778 [2024-12-08 06:07:22.794899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.808840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.808894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:59.778 [2024-12-08 06:07:22.808935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.899 ms 00:19:59.778 [2024-12-08 06:07:22.808954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.810947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.810985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:59.778 [2024-12-08 06:07:22.811015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:19:59.778 [2024-12-08 06:07:22.811025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.812733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.812770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:59.778 [2024-12-08 06:07:22.812800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:19:59.778 [2024-12-08 06:07:22.812810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.778 [2024-12-08 06:07:22.813151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.778 [2024-12-08 06:07:22.813171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.778 [2024-12-08 06:07:22.813233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:19:59.778 [2024-12-08 06:07:22.813247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.829569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.829681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.038 [2024-12-08 06:07:22.829722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.296 ms 00:20:00.038 [2024-12-08 06:07:22.829745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.837380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:00.038 [2024-12-08 06:07:22.839659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.839699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.038 [2024-12-08 06:07:22.839715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.845 ms 00:20:00.038 [2024-12-08 06:07:22.839734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.839833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.839854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.038 [2024-12-08 06:07:22.839867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:00.038 [2024-12-08 06:07:22.839877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.840009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.840029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.038 [2024-12-08 06:07:22.840040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:00.038 [2024-12-08 06:07:22.840054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.840103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.840120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.038 [2024-12-08 06:07:22.840130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:00.038 [2024-12-08 06:07:22.840140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.840179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.038 [2024-12-08 06:07:22.840203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.840213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.038 [2024-12-08 06:07:22.840264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:00.038 [2024-12-08 06:07:22.840279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.843925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.843965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.038 [2024-12-08 06:07:22.843996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.619 ms 00:20:00.038 [2024-12-08 06:07:22.844007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 [2024-12-08 06:07:22.844082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.038 [2024-12-08 06:07:22.844100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.038 [2024-12-08 06:07:22.844121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:00.038 [2024-12-08 06:07:22.844131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.038 
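The superblock layout dump above encodes each region as hex block offsets and sizes, while the dump_region output reports MiB; the two agree under a 4 KiB FTL block size, which this log pins down on its own: region type 0x9 at blk_offs:0x40 / blk_sz:0x1900000 corresponds exactly to the data_btm region's "offset: 0.25 MiB / blocks: 102400.00 MiB", and type 0x2 at blk_sz:0x5000 is the 80.00 MiB l2p region. A minimal conversion sketch; the helper is hypothetical and the 4096-byte constant is inferred from the matching figures in this run rather than quoted from the SPDK sources:

```python
# Convert blk_offs/blk_sz fields from the FTL superblock layout dump into
# MiB, assuming the 4 KiB block size implied by the dump_region lines above.
FTL_BLOCK_SIZE = 4096  # bytes per block; inferred from this log, not the sources

def blocks_to_mib(hex_blocks: str) -> float:
    return int(hex_blocks, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

# Cross-checks against this run's dumps:
assert blocks_to_mib("0x1900000") == 102400.0  # type 0x9 size == data_btm blocks
assert blocks_to_mib("0x40") == 0.25           # type 0x9 offset == data_btm offset
assert blocks_to_mib("0x5000") == 80.0         # type 0x2 size == l2p region
```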
[2024-12-08 06:07:22.845609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.793 ms, result 0 00:20:00.976  [2024-12-08T06:07:24.958Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-08T06:07:25.898Z] Copying: 47/1024 [MB] (24 MBps) [2024-12-08T06:07:26.863Z] Copying: 72/1024 [MB] (24 MBps) [2024-12-08T06:07:28.242Z] Copying: 96/1024 [MB] (24 MBps) [2024-12-08T06:07:29.180Z] Copying: 120/1024 [MB] (24 MBps) [2024-12-08T06:07:30.119Z] Copying: 145/1024 [MB] (24 MBps) [2024-12-08T06:07:31.056Z] Copying: 169/1024 [MB] (24 MBps) [2024-12-08T06:07:31.992Z] Copying: 194/1024 [MB] (24 MBps) [2024-12-08T06:07:32.930Z] Copying: 218/1024 [MB] (24 MBps) [2024-12-08T06:07:33.867Z] Copying: 242/1024 [MB] (23 MBps) [2024-12-08T06:07:35.246Z] Copying: 266/1024 [MB] (24 MBps) [2024-12-08T06:07:36.181Z] Copying: 291/1024 [MB] (24 MBps) [2024-12-08T06:07:37.117Z] Copying: 317/1024 [MB] (26 MBps) [2024-12-08T06:07:38.051Z] Copying: 342/1024 [MB] (25 MBps) [2024-12-08T06:07:38.986Z] Copying: 367/1024 [MB] (24 MBps) [2024-12-08T06:07:39.935Z] Copying: 390/1024 [MB] (23 MBps) [2024-12-08T06:07:40.879Z] Copying: 414/1024 [MB] (23 MBps) [2024-12-08T06:07:41.868Z] Copying: 437/1024 [MB] (23 MBps) [2024-12-08T06:07:43.245Z] Copying: 461/1024 [MB] (23 MBps) [2024-12-08T06:07:44.181Z] Copying: 484/1024 [MB] (22 MBps) [2024-12-08T06:07:45.120Z] Copying: 507/1024 [MB] (23 MBps) [2024-12-08T06:07:46.053Z] Copying: 530/1024 [MB] (23 MBps) [2024-12-08T06:07:46.989Z] Copying: 554/1024 [MB] (23 MBps) [2024-12-08T06:07:47.927Z] Copying: 578/1024 [MB] (23 MBps) [2024-12-08T06:07:48.864Z] Copying: 602/1024 [MB] (24 MBps) [2024-12-08T06:07:50.241Z] Copying: 625/1024 [MB] (23 MBps) [2024-12-08T06:07:51.179Z] Copying: 649/1024 [MB] (23 MBps) [2024-12-08T06:07:52.117Z] Copying: 673/1024 [MB] (23 MBps) [2024-12-08T06:07:53.055Z] Copying: 696/1024 [MB] (23 MBps) [2024-12-08T06:07:53.992Z] Copying: 720/1024 [MB] (23 MBps) [2024-12-08T06:07:54.930Z] Copying: 744/1024 [MB] (23 MBps) [2024-12-08T06:07:55.864Z] Copying: 768/1024 [MB] (24 MBps) [2024-12-08T06:07:57.266Z] Copying: 792/1024 [MB] (24 MBps) [2024-12-08T06:07:58.197Z] Copying: 815/1024 [MB] (23 MBps) [2024-12-08T06:07:59.132Z] Copying: 838/1024 [MB] (22 MBps) [2024-12-08T06:08:00.068Z] Copying: 861/1024 [MB] (22 MBps) [2024-12-08T06:08:01.029Z] Copying: 884/1024 [MB] (23 MBps) [2024-12-08T06:08:01.965Z] Copying: 907/1024 [MB] (23 MBps) [2024-12-08T06:08:02.901Z] Copying: 930/1024 [MB] (22 MBps) [2024-12-08T06:08:04.280Z] Copying: 953/1024 [MB] (22 MBps) [2024-12-08T06:08:05.216Z] Copying: 976/1024 [MB] (23 MBps) [2024-12-08T06:08:06.149Z] Copying: 1000/1024 [MB] (23 MBps) [2024-12-08T06:08:07.085Z] Copying: 1023/1024 [MB] (23 MBps) [2024-12-08T06:08:07.085Z] Copying: 1048508/1048576 [kB] (888 kBps) [2024-12-08T06:08:07.085Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 06:08:06.949807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.040 [2024-12-08 06:08:06.949915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:44.040 [2024-12-08 06:08:06.949953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:44.040 [2024-12-08 06:08:06.949965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.040 [2024-12-08 06:08:06.951358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.040 [2024-12-08 06:08:06.954583] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.040 [2024-12-08 06:08:06.954823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:44.040 [2024-12-08 06:08:06.954865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:20:44.040 [2024-12-08 06:08:06.954878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.040 [2024-12-08 06:08:06.968478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.040 [2024-12-08 06:08:06.968677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:44.040 [2024-12-08 06:08:06.968722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.174 ms 00:20:44.040 [2024-12-08 06:08:06.968735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.040 [2024-12-08 06:08:06.989650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.040 [2024-12-08 06:08:06.989691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:44.040 [2024-12-08 06:08:06.989723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.888 ms 00:20:44.040 [2024-12-08 06:08:06.989741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.040 [2024-12-08 06:08:06.995805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.040 [2024-12-08 06:08:06.995853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:44.040 [2024-12-08 06:08:06.995883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.026 ms 00:20:44.040 [2024-12-08 06:08:06.995921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.040 [2024-12-08 06:08:06.997353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.040 [2024-12-08 06:08:06.997391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:44.040 [2024-12-08 06:08:06.997422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:20:44.040 [2024-12-08 06:08:06.997432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.040 [2024-12-08 06:08:07.000477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.040 [2024-12-08 06:08:07.000700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.040 [2024-12-08 06:08:07.000742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:20:44.040 [2024-12-08 06:08:07.000754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.301 [2024-12-08 06:08:07.107548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.301 [2024-12-08 06:08:07.107638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.301 [2024-12-08 06:08:07.107661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.746 ms 00:20:44.301 [2024-12-08 06:08:07.107673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.301 [2024-12-08 06:08:07.109576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.301 [2024-12-08 06:08:07.109765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:44.301 [2024-12-08 06:08:07.109791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:20:44.301 [2024-12-08 06:08:07.109805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
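A note on the transfer that the shutdown sequence above is closing out: spdk_dd pushed the 1024 MB test file through ftl0 with a progress stamp roughly every second and reported an overall average of 23 MBps; the 888 kBps entry near the end covers only the last 68 kB (1048576 - 1048508 kB). Recomputing that average from a saved console log takes only a few lines; the regex and helper below are illustrative, not part of SPDK or its test scripts:

```python
# Recompute the average copy rate from the "Copying: N/1024 [MB]" progress
# stamps in a saved console log, as a sanity check on the reported average.
import re
from datetime import datetime

PROGRESS = re.compile(
    r"\[(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3})Z\]\s+Copying:\s+(\d+)/\d+ \[MB\]"
)

def average_mbps(console_text: str) -> float:
    """MB/s between the first and last progress stamps."""
    stamps = [(datetime.fromisoformat(ts), int(mb))
              for ts, mb in PROGRESS.findall(console_text)]
    (t0, mb0), (t1, mb1) = stamps[0], stamps[-1]
    return (mb1 - mb0) / (t1 - t0).total_seconds()

# For this run: ~1001 MB between 06:07:24.958Z and 06:08:07.085Z, i.e. about
# 23.8 MB/s, consistent with the "average 23 MBps" line above.
```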
00:20:44.301 [2024-12-08 06:08:07.111166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.301 [2024-12-08 06:08:07.111217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:44.301 [2024-12-08 06:08:07.111234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:20:44.301 [2024-12-08 06:08:07.111244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.301 [2024-12-08 06:08:07.112536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.301 [2024-12-08 06:08:07.112573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:44.301 [2024-12-08 06:08:07.112588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:20:44.301 [2024-12-08 06:08:07.112597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.301 [2024-12-08 06:08:07.113726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.301 [2024-12-08 06:08:07.113763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:44.301 [2024-12-08 06:08:07.113777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:20:44.301 [2024-12-08 06:08:07.113787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.301 [2024-12-08 06:08:07.113820] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:44.301 [2024-12-08 06:08:07.113842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115712 / 261120 wr_cnt: 1 state: open 00:20:44.301 [2024-12-08 06:08:07.113855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.113994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114286] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:44.301 [2024-12-08 06:08:07.114397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 
06:08:07.114568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:20:44.302 [2024-12-08 06:08:07.114833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:44.302 [2024-12-08 06:08:07.114963] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:44.302 [2024-12-08 06:08:07.114976] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25ee0777-a5f2-4ffa-adf2-aa0d3705dbe4 00:20:44.302 [2024-12-08 06:08:07.114987] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115712 00:20:44.302 [2024-12-08 06:08:07.114997] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116672 00:20:44.302 [2024-12-08 06:08:07.115006] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115712 00:20:44.302 [2024-12-08 06:08:07.115023] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:20:44.302 [2024-12-08 06:08:07.115037] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:44.302 [2024-12-08 06:08:07.115047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:44.302 [2024-12-08 06:08:07.115057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:44.302 [2024-12-08 06:08:07.115066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:44.302 [2024-12-08 06:08:07.115075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:44.302 [2024-12-08 06:08:07.115086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.302 [2024-12-08 06:08:07.115106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:44.302 [2024-12-08 06:08:07.115117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:20:44.302 [2024-12-08 06:08:07.115127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.302 [2024-12-08 06:08:07.116958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.302 [2024-12-08 06:08:07.117024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:44.302 [2024-12-08 
06:08:07.117076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:20:44.302 [2024-12-08 06:08:07.117114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.302 [2024-12-08 06:08:07.117312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.302 [2024-12-08 06:08:07.117459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:44.302 [2024-12-08 06:08:07.117565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:44.303 [2024-12-08 06:08:07.117690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.122126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.122331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.303 [2024-12-08 06:08:07.122442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.122558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.122684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.122779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.303 [2024-12-08 06:08:07.122876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.122923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.123098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.123261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.303 [2024-12-08 06:08:07.123370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.123505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.123577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.123760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.303 [2024-12-08 06:08:07.123830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.123868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.132118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.132433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.303 [2024-12-08 06:08:07.132564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.132638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.139096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.139323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.303 [2024-12-08 06:08:07.139367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.139381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.139486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.139505] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.303 [2024-12-08 06:08:07.139525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.139546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.139580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.139595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.303 [2024-12-08 06:08:07.139608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.139619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.139718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.139738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.303 [2024-12-08 06:08:07.139751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.139784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.139846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.139864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:44.303 [2024-12-08 06:08:07.139875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.139901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.139969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.139996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.303 [2024-12-08 06:08:07.140007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.140022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.140094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.303 [2024-12-08 06:08:07.140113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.303 [2024-12-08 06:08:07.140124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.303 [2024-12-08 06:08:07.140134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.303 [2024-12-08 06:08:07.140597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 193.046 ms, result 0 00:20:44.872 00:20:44.872 00:20:44.872 06:08:07 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:45.132 [2024-12-08 06:08:07.959766] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
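That completes the first pass: startup, a 1024 MB write through ftl0, and a clean shutdown whose summary above reports 193.046 ms with result 0. The statistics dump just before it put write amplification at 1.0083, which is simply the ratio of the two dumped counters; spelled out here only to make the relationship explicit:

```python
# Write amplification from the ftl_dev_dump_stats counters in the first pass:
# WAF = total writes / user writes.
total_writes = 116672  # "total writes" from the stats dump
user_writes = 115712   # "user writes" from the stats dump
print(f"WAF: {total_writes / user_writes:.4f}")  # WAF: 1.0083, as logged
```

The spdk_dd invocation above (--ib=ftl0 into testfile) is the read-back pass, so it brings the FTL device up again, and the same startup trace repeats below.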
00:20:45.132 [2024-12-08 06:08:07.959966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88613 ] 00:20:45.132 [2024-12-08 06:08:08.107623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.132 [2024-12-08 06:08:08.141898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.392 [2024-12-08 06:08:08.225757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.392 [2024-12-08 06:08:08.225856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.392 [2024-12-08 06:08:08.384376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-12-08 06:08:08.384451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:45.392 [2024-12-08 06:08:08.384475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:45.392 [2024-12-08 06:08:08.384486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.384580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-12-08 06:08:08.384599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.392 [2024-12-08 06:08:08.384610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:45.392 [2024-12-08 06:08:08.384633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.384663] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:45.392 [2024-12-08 06:08:08.385002] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:45.392 [2024-12-08 06:08:08.385033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-12-08 06:08:08.385053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.392 [2024-12-08 06:08:08.385064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:20:45.392 [2024-12-08 06:08:08.385074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.386752] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:45.392 [2024-12-08 06:08:08.389410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-12-08 06:08:08.389613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:45.392 [2024-12-08 06:08:08.389743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:20:45.392 [2024-12-08 06:08:08.389793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.389920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-12-08 06:08:08.390126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:45.392 [2024-12-08 06:08:08.390196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:45.392 [2024-12-08 06:08:08.390372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.395279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:45.392 [2024-12-08 06:08:08.395347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.392 [2024-12-08 06:08:08.395364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:20:45.392 [2024-12-08 06:08:08.395375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.395595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-12-08 06:08:08.395620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.392 [2024-12-08 06:08:08.395633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:20:45.392 [2024-12-08 06:08:08.395643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.395729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-12-08 06:08:08.395745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:45.392 [2024-12-08 06:08:08.395772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:45.392 [2024-12-08 06:08:08.395782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-12-08 06:08:08.395839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:45.392 [2024-12-08 06:08:08.397424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.393 [2024-12-08 06:08:08.397462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.393 [2024-12-08 06:08:08.397477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:20:45.393 [2024-12-08 06:08:08.397487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.393 [2024-12-08 06:08:08.397528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.393 [2024-12-08 06:08:08.397542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:45.393 [2024-12-08 06:08:08.397553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:45.393 [2024-12-08 06:08:08.397562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.393 [2024-12-08 06:08:08.397600] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:45.393 [2024-12-08 06:08:08.397640] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:45.393 [2024-12-08 06:08:08.397687] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:45.393 [2024-12-08 06:08:08.397720] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:45.393 [2024-12-08 06:08:08.397821] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:45.393 [2024-12-08 06:08:08.397836] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:45.393 [2024-12-08 06:08:08.397849] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:45.393 [2024-12-08 06:08:08.397861] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:45.393 [2024-12-08 06:08:08.397878] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:45.393 [2024-12-08 06:08:08.397894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:45.393 [2024-12-08 06:08:08.397903] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:45.393 [2024-12-08 06:08:08.397913] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:45.393 [2024-12-08 06:08:08.397922] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:45.393 [2024-12-08 06:08:08.397933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.393 [2024-12-08 06:08:08.397943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:45.393 [2024-12-08 06:08:08.397962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:20:45.393 [2024-12-08 06:08:08.397972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.393 [2024-12-08 06:08:08.398056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.393 [2024-12-08 06:08:08.398069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:45.393 [2024-12-08 06:08:08.398086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:45.393 [2024-12-08 06:08:08.398096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.393 [2024-12-08 06:08:08.398211] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:45.393 [2024-12-08 06:08:08.398230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:45.393 [2024-12-08 06:08:08.398252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:45.393 [2024-12-08 06:08:08.398285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:45.393 [2024-12-08 06:08:08.398313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.393 [2024-12-08 06:08:08.398331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:45.393 [2024-12-08 06:08:08.398340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:45.393 [2024-12-08 06:08:08.398349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.393 [2024-12-08 06:08:08.398357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:45.393 [2024-12-08 06:08:08.398366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:45.393 [2024-12-08 06:08:08.398375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:45.393 [2024-12-08 06:08:08.398395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398403] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:45.393 [2024-12-08 06:08:08.398426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:45.393 [2024-12-08 06:08:08.398453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:45.393 [2024-12-08 06:08:08.398479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:45.393 [2024-12-08 06:08:08.398505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:45.393 [2024-12-08 06:08:08.398532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.393 [2024-12-08 06:08:08.398549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:45.393 [2024-12-08 06:08:08.398560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:45.393 [2024-12-08 06:08:08.398570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.393 [2024-12-08 06:08:08.398579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:45.393 [2024-12-08 06:08:08.398589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:45.393 [2024-12-08 06:08:08.398598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:45.393 [2024-12-08 06:08:08.398616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:45.393 [2024-12-08 06:08:08.398626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398635] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:45.393 [2024-12-08 06:08:08.398645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:45.393 [2024-12-08 06:08:08.398654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.393 [2024-12-08 06:08:08.398676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.393 [2024-12-08 06:08:08.398687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:45.393 [2024-12-08 06:08:08.398697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:45.393 [2024-12-08 06:08:08.398706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:45.394 
[2024-12-08 06:08:08.398715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:45.394 [2024-12-08 06:08:08.398726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:45.394 [2024-12-08 06:08:08.398736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:45.394 [2024-12-08 06:08:08.398747] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:45.394 [2024-12-08 06:08:08.398759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.394 [2024-12-08 06:08:08.398770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:45.394 [2024-12-08 06:08:08.398780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:45.394 [2024-12-08 06:08:08.398789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:45.394 [2024-12-08 06:08:08.398799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:45.394 [2024-12-08 06:08:08.398808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:45.394 [2024-12-08 06:08:08.398818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:45.394 [2024-12-08 06:08:08.398827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:45.394 [2024-12-08 06:08:08.398837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:45.394 [2024-12-08 06:08:08.398846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:45.394 [2024-12-08 06:08:08.398866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:45.394 [2024-12-08 06:08:08.398876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:45.394 [2024-12-08 06:08:08.398886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:45.394 [2024-12-08 06:08:08.398898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:45.394 [2024-12-08 06:08:08.398909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:45.394 [2024-12-08 06:08:08.398919] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:45.394 [2024-12-08 06:08:08.398930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.394 [2024-12-08 06:08:08.398941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:45.394 [2024-12-08 06:08:08.398951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:45.394 [2024-12-08 06:08:08.398961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:45.394 [2024-12-08 06:08:08.398970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:45.394 [2024-12-08 06:08:08.398982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.398992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:45.394 [2024-12-08 06:08:08.399010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:20:45.394 [2024-12-08 06:08:08.399027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.417160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.417232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.394 [2024-12-08 06:08:08.417268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.075 ms 00:20:45.394 [2024-12-08 06:08:08.417292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.417407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.417422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:45.394 [2024-12-08 06:08:08.417433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:45.394 [2024-12-08 06:08:08.417443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.425464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.425524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.394 [2024-12-08 06:08:08.425557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.927 ms 00:20:45.394 [2024-12-08 06:08:08.425567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.425628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.425643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.394 [2024-12-08 06:08:08.425654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:45.394 [2024-12-08 06:08:08.425665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.425996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.426018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.394 [2024-12-08 06:08:08.426029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:20:45.394 [2024-12-08 06:08:08.426050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.426216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.426243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.394 [2024-12-08 06:08:08.426254] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:45.394 [2024-12-08 06:08:08.426279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.431122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.431227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.394 [2024-12-08 06:08:08.431251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.796 ms 00:20:45.394 [2024-12-08 06:08:08.431262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-12-08 06:08:08.433989] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:45.394 [2024-12-08 06:08:08.434032] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:45.394 [2024-12-08 06:08:08.434067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-12-08 06:08:08.434078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:45.394 [2024-12-08 06:08:08.434089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:20:45.394 [2024-12-08 06:08:08.434099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.653 [2024-12-08 06:08:08.449154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.653 [2024-12-08 06:08:08.449228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:45.653 [2024-12-08 06:08:08.449273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.007 ms 00:20:45.653 [2024-12-08 06:08:08.449284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.653 [2024-12-08 06:08:08.451438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.653 [2024-12-08 06:08:08.451521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:45.653 [2024-12-08 06:08:08.451538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:20:45.653 [2024-12-08 06:08:08.451555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.653 [2024-12-08 06:08:08.453274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.653 [2024-12-08 06:08:08.453485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:45.654 [2024-12-08 06:08:08.453512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:20:45.654 [2024-12-08 06:08:08.453523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.453936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.453966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:45.654 [2024-12-08 06:08:08.453981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:20:45.654 [2024-12-08 06:08:08.453991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.470397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.470725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:45.654 [2024-12-08 06:08:08.470763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.381 ms 00:20:45.654 [2024-12-08 06:08:08.470781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.478664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:45.654 [2024-12-08 06:08:08.481200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.481277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:45.654 [2024-12-08 06:08:08.481293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.359 ms 00:20:45.654 [2024-12-08 06:08:08.481325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.481398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.481417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:45.654 [2024-12-08 06:08:08.481430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:45.654 [2024-12-08 06:08:08.481441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.483156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.483230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:45.654 [2024-12-08 06:08:08.483245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:20:45.654 [2024-12-08 06:08:08.483259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.483303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.483329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:45.654 [2024-12-08 06:08:08.483341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:45.654 [2024-12-08 06:08:08.483351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.483395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:45.654 [2024-12-08 06:08:08.483412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.483422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:45.654 [2024-12-08 06:08:08.483437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:45.654 [2024-12-08 06:08:08.483447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.487233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.487288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:45.654 [2024-12-08 06:08:08.487306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:20:45.654 [2024-12-08 06:08:08.487317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 [2024-12-08 06:08:08.487406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.654 [2024-12-08 06:08:08.487470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:45.654 [2024-12-08 06:08:08.487486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:45.654 [2024-12-08 06:08:08.487497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.654 
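Each FTL management step above is traced as an Action/name/duration/status quadruple, so slow bring-up is easy to localize: here 'Initialize metadata' (18.075 ms), 'Restore P2L checkpoints' (16.381 ms) and 'Restore valid map metadata' (15.007 ms) account for roughly half of the startup total reported below. A minimal sketch for ranking the steps, assuming this console output was saved to a file called build.log with one record per line (the file name is hypothetical; the record format is the one shown above):

    # Hypothetical helper, not part of the test suite: rank traced FTL
    # management steps by duration, slowest first.
    awk -F': ' '
      /trace_step/ && /name:/     { name = $NF }           # step name, e.g. "Initialize metadata"
      /trace_step/ && /duration:/ { sub(/ ms$/, "", $NF)   # strip the unit
                                    printf "%10.3f ms  %s\n", $NF, name }
    ' build.log | sort -rn | head
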
[2024-12-08 06:08:08.488735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.756 ms, result 0
00:20:47.032 [2024-12-08T06:08:11.014Z] Copying: 20/1024 [MB] (20 MBps)
[2024-12-08T06:08:11.953Z] Copying: 44/1024 [MB] (24 MBps)
[2024-12-08T06:08:12.890Z] Copying: 67/1024 [MB] (22 MBps)
[2024-12-08T06:08:13.827Z] Copying: 90/1024 [MB] (23 MBps)
[2024-12-08T06:08:14.762Z] Copying: 113/1024 [MB] (22 MBps)
[2024-12-08T06:08:15.724Z] Copying: 137/1024 [MB] (23 MBps)
[2024-12-08T06:08:17.096Z] Copying: 160/1024 [MB] (23 MBps)
[2024-12-08T06:08:18.030Z] Copying: 184/1024 [MB] (23 MBps)
[2024-12-08T06:08:18.965Z] Copying: 207/1024 [MB] (23 MBps)
[2024-12-08T06:08:19.901Z] Copying: 231/1024 [MB] (23 MBps)
[2024-12-08T06:08:20.837Z] Copying: 254/1024 [MB] (23 MBps)
[2024-12-08T06:08:21.773Z] Copying: 278/1024 [MB] (23 MBps)
[2024-12-08T06:08:22.709Z] Copying: 301/1024 [MB] (23 MBps)
[2024-12-08T06:08:24.096Z] Copying: 325/1024 [MB] (23 MBps)
[2024-12-08T06:08:25.035Z] Copying: 348/1024 [MB] (23 MBps)
[2024-12-08T06:08:25.969Z] Copying: 372/1024 [MB] (23 MBps)
[2024-12-08T06:08:26.908Z] Copying: 395/1024 [MB] (23 MBps)
[2024-12-08T06:08:27.845Z] Copying: 418/1024 [MB] (22 MBps)
[2024-12-08T06:08:28.785Z] Copying: 440/1024 [MB] (22 MBps)
[2024-12-08T06:08:29.721Z] Copying: 463/1024 [MB] (22 MBps)
[2024-12-08T06:08:31.117Z] Copying: 485/1024 [MB] (22 MBps)
[2024-12-08T06:08:32.056Z] Copying: 508/1024 [MB] (22 MBps)
[2024-12-08T06:08:32.991Z] Copying: 531/1024 [MB] (22 MBps)
[2024-12-08T06:08:33.927Z] Copying: 554/1024 [MB] (23 MBps)
[2024-12-08T06:08:34.863Z] Copying: 578/1024 [MB] (23 MBps)
[2024-12-08T06:08:35.796Z] Copying: 600/1024 [MB] (22 MBps)
[2024-12-08T06:08:36.731Z] Copying: 623/1024 [MB] (22 MBps)
[2024-12-08T06:08:38.106Z] Copying: 646/1024 [MB] (23 MBps)
[2024-12-08T06:08:39.040Z] Copying: 669/1024 [MB] (22 MBps)
[2024-12-08T06:08:39.987Z] Copying: 691/1024 [MB] (22 MBps)
[2024-12-08T06:08:40.922Z] Copying: 714/1024 [MB] (23 MBps)
[2024-12-08T06:08:41.859Z] Copying: 737/1024 [MB] (22 MBps)
[2024-12-08T06:08:42.796Z] Copying: 760/1024 [MB] (22 MBps)
[2024-12-08T06:08:43.732Z] Copying: 784/1024 [MB] (23 MBps)
[2024-12-08T06:08:45.110Z] Copying: 806/1024 [MB] (22 MBps)
[2024-12-08T06:08:45.718Z] Copying: 829/1024 [MB] (22 MBps)
[2024-12-08T06:08:47.091Z] Copying: 852/1024 [MB] (23 MBps)
[2024-12-08T06:08:48.026Z] Copying: 875/1024 [MB] (22 MBps)
[2024-12-08T06:08:48.962Z] Copying: 898/1024 [MB] (22 MBps)
[2024-12-08T06:08:49.897Z] Copying: 920/1024 [MB] (22 MBps)
[2024-12-08T06:08:50.834Z] Copying: 942/1024 [MB] (22 MBps)
[2024-12-08T06:08:51.772Z] Copying: 966/1024 [MB] (23 MBps)
[2024-12-08T06:08:52.712Z] Copying: 989/1024 [MB] (22 MBps)
[2024-12-08T06:08:53.281Z] Copying: 1011/1024 [MB] (22 MBps)
[2024-12-08T06:08:53.850Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-12-08 06:08:53.545090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:30.805 [2024-12-08 06:08:53.545488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:30.805 [2024-12-08 06:08:53.545528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:21:30.805 [2024-12-08 06:08:53.545546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:30.805 [2024-12-08 06:08:53.545608] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:30.805 [2024-12-08 06:08:53.546166] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.546210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:30.805 [2024-12-08 06:08:53.546228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:21:30.805 [2024-12-08 06:08:53.546257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.546581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.546604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:30.805 [2024-12-08 06:08:53.546620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:21:30.805 [2024-12-08 06:08:53.546634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.552877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.553133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:30.805 [2024-12-08 06:08:53.553163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.217 ms 00:21:30.805 [2024-12-08 06:08:53.553176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.559713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.559950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:30.805 [2024-12-08 06:08:53.559993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.476 ms 00:21:30.805 [2024-12-08 06:08:53.560007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.561348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.561386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:30.805 [2024-12-08 06:08:53.561417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.265 ms 00:21:30.805 [2024-12-08 06:08:53.561428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.564508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.564579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:30.805 [2024-12-08 06:08:53.564610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:21:30.805 [2024-12-08 06:08:53.564621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.685599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.685672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:30.805 [2024-12-08 06:08:53.685693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.938 ms 00:21:30.805 [2024-12-08 06:08:53.685704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.687533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.687574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:30.805 [2024-12-08 06:08:53.687591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.806 ms 00:21:30.805 [2024-12-08 06:08:53.687602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 
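The persist sequence above is what makes the next load 'clean': the L2P, NV cache state, valid map, P2L checkpoints and band/trim metadata are all flushed so that a later startup can skip recovery. Persisting the P2L metadata dominates at 120.938 ms, roughly 70% of the 173.722 ms 'FTL shutdown' total reported further down. A sketch for totalling the traced durations, under the same hypothetical build.log assumption as above:

    # Hypothetical: sum every "duration: X ms" emitted by trace_step.
    # Iterates over fields so multiple records per line are handled too.
    awk -F'duration: ' '/trace_step/ { for (i = 2; i <= NF; i++) { split($i, a, " "); sum += a[1] } }
                        END { printf "traced steps total: %.3f ms\n", sum }' build.log
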
[2024-12-08 06:08:53.689116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.689308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:30.805 [2024-12-08 06:08:53.689352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:21:30.805 [2024-12-08 06:08:53.689365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.690629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.690668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:30.805 [2024-12-08 06:08:53.690682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:21:30.805 [2024-12-08 06:08:53.690692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.691861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.805 [2024-12-08 06:08:53.691902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:30.805 [2024-12-08 06:08:53.691916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:21:30.805 [2024-12-08 06:08:53.691925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.805 [2024-12-08 06:08:53.691961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:30.805 [2024-12-08 06:08:53.691982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:21:30.805 [2024-12-08 06:08:53.691996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:30.805 [2024-12-08 06:08:53.692226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692416] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 
06:08:53.692945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.692996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.693014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.693025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.693035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.693045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.693056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:30.806 [2024-12-08 06:08:53.693074] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:30.806 [2024-12-08 06:08:53.693084] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25ee0777-a5f2-4ffa-adf2-aa0d3705dbe4 00:21:30.806 [2024-12-08 06:08:53.693095] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:30.806 [2024-12-08 06:08:53.693104] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16320 00:21:30.806 [2024-12-08 06:08:53.693114] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15360 00:21:30.806 [2024-12-08 06:08:53.693132] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0625 00:21:30.806 [2024-12-08 06:08:53.693145] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:30.806 [2024-12-08 06:08:53.693155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:30.806 [2024-12-08 06:08:53.693165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:30.806 [2024-12-08 06:08:53.693174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:30.806 [2024-12-08 06:08:53.693549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:30.806 [2024-12-08 06:08:53.693596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.806 [2024-12-08 06:08:53.693632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:30.806 [2024-12-08 06:08:53.693669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:21:30.806 [2024-12-08 06:08:53.693912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.806 [2024-12-08 06:08:53.695365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.806 [2024-12-08 06:08:53.695538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:30.806 [2024-12-08 06:08:53.695660] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:21:30.806 [2024-12-08 06:08:53.695776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.806 [2024-12-08 06:08:53.695928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.807 [2024-12-08 06:08:53.696034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:30.807 [2024-12-08 06:08:53.696148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:30.807 [2024-12-08 06:08:53.696252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.700884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.701078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:30.807 [2024-12-08 06:08:53.701102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.701114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.701184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.701199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:30.807 [2024-12-08 06:08:53.701225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.701239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.701324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.701348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:30.807 [2024-12-08 06:08:53.701368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.701386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.701407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.701436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:30.807 [2024-12-08 06:08:53.701446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.701455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.709935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.710241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:30.807 [2024-12-08 06:08:53.710360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.710410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.717198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.717427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:30.807 [2024-12-08 06:08:53.717569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.717633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.717753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.717809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:21:30.807 [2024-12-08 06:08:53.717969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.718021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.718229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.718256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:30.807 [2024-12-08 06:08:53.718269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.718280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.718371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.718390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:30.807 [2024-12-08 06:08:53.718402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.718433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.718480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.718498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:30.807 [2024-12-08 06:08:53.718509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.718518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.718560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.718575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:30.807 [2024-12-08 06:08:53.718585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.718601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.718650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.807 [2024-12-08 06:08:53.718666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:30.807 [2024-12-08 06:08:53.718676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.807 [2024-12-08 06:08:53.718687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.807 [2024-12-08 06:08:53.718831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 173.722 ms, result 0 00:21:31.066 00:21:31.066 00:21:31.066 06:08:53 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:32.964 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:32.964 06:08:55 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:32.964 06:08:55 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:32.964 06:08:55 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 87007 00:21:33.223 06:08:56 ftl.ftl_restore -- 
common/autotest_common.sh@950 -- # '[' -z 87007 ']' 00:21:33.223 06:08:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87007 00:21:33.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87007) - No such process 00:21:33.223 Process with pid 87007 is not found 00:21:33.223 06:08:56 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 87007 is not found' 00:21:33.223 Remove shared memory files 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:33.223 06:08:56 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:33.223 ************************************ 00:21:33.223 END TEST ftl_restore 00:21:33.223 ************************************ 00:21:33.223 00:21:33.223 real 3m23.069s 00:21:33.223 user 3m9.278s 00:21:33.223 sys 0m15.370s 00:21:33.223 06:08:56 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:33.223 06:08:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:33.223 06:08:56 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:33.223 06:08:56 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:33.223 06:08:56 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.223 06:08:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:33.223 ************************************ 00:21:33.223 START TEST ftl_dirty_shutdown 00:21:33.223 ************************************ 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:33.223 * Looking for test storage... 
00:21:33.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.223 --rc genhtml_branch_coverage=1 00:21:33.223 --rc genhtml_function_coverage=1 00:21:33.223 --rc genhtml_legend=1 00:21:33.223 --rc geninfo_all_blocks=1 00:21:33.223 --rc geninfo_unexecuted_blocks=1 00:21:33.223 00:21:33.223 ' 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.223 --rc genhtml_branch_coverage=1 00:21:33.223 --rc genhtml_function_coverage=1 00:21:33.223 --rc genhtml_legend=1 00:21:33.223 --rc geninfo_all_blocks=1 00:21:33.223 --rc geninfo_unexecuted_blocks=1 00:21:33.223 00:21:33.223 ' 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.223 --rc genhtml_branch_coverage=1 00:21:33.223 --rc genhtml_function_coverage=1 00:21:33.223 --rc genhtml_legend=1 00:21:33.223 --rc geninfo_all_blocks=1 00:21:33.223 --rc geninfo_unexecuted_blocks=1 00:21:33.223 00:21:33.223 ' 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:33.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.223 --rc genhtml_branch_coverage=1 00:21:33.223 --rc genhtml_function_coverage=1 00:21:33.223 --rc genhtml_legend=1 00:21:33.223 --rc geninfo_all_blocks=1 00:21:33.223 --rc geninfo_unexecuted_blocks=1 00:21:33.223 00:21:33.223 ' 00:21:33.223 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:33.482 06:08:56 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89163 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89163 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89163 ']' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.482 06:08:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 [2024-12-08 06:08:56.383382] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
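Before any bdev RPCs are issued, the dirty_shutdown test brings up its own target: spdk_tgt is started with -m 0x1 (reactor mask, core 0 only) and waitforlisten, from the shared autotest helpers, blocks until the UNIX-domain RPC socket answers. The pattern, reduced to a minimal sketch with an illustrative polling loop rather than the real waitforlisten implementation:

    # Illustrative only; paths match this CI environment.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # rpc_get_methods is a cheap RPC; retry until the target's default
    # socket /var/tmp/spdk.sock accepts it.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
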
00:21:33.482 [2024-12-08 06:08:56.383784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89163 ] 00:21:33.741 [2024-12-08 06:08:56.530251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.741 [2024-12-08 06:08:56.573906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:34.306 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:34.871 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:35.129 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:35.129 { 00:21:35.129 "name": "nvme0n1", 00:21:35.129 "aliases": [ 00:21:35.129 "d0a027aa-b59d-40bd-925a-6d52ea3e46de" 00:21:35.129 ], 00:21:35.129 "product_name": "NVMe disk", 00:21:35.129 "block_size": 4096, 00:21:35.129 "num_blocks": 1310720, 00:21:35.129 "uuid": "d0a027aa-b59d-40bd-925a-6d52ea3e46de", 00:21:35.129 "numa_id": -1, 00:21:35.129 "assigned_rate_limits": { 00:21:35.129 "rw_ios_per_sec": 0, 00:21:35.129 "rw_mbytes_per_sec": 0, 00:21:35.129 "r_mbytes_per_sec": 0, 00:21:35.129 "w_mbytes_per_sec": 0 00:21:35.129 }, 00:21:35.129 "claimed": true, 00:21:35.129 "claim_type": "read_many_write_one", 00:21:35.129 "zoned": false, 00:21:35.129 "supported_io_types": { 00:21:35.129 "read": true, 00:21:35.129 "write": true, 00:21:35.129 "unmap": true, 00:21:35.129 "flush": true, 00:21:35.129 "reset": true, 00:21:35.130 "nvme_admin": true, 00:21:35.130 "nvme_io": true, 00:21:35.130 "nvme_io_md": false, 00:21:35.130 "write_zeroes": true, 00:21:35.130 "zcopy": false, 00:21:35.130 "get_zone_info": false, 00:21:35.130 "zone_management": false, 00:21:35.130 "zone_append": false, 00:21:35.130 "compare": true, 00:21:35.130 "compare_and_write": false, 00:21:35.130 "abort": true, 00:21:35.130 "seek_hole": false, 00:21:35.130 "seek_data": false, 00:21:35.130 
"copy": true, 00:21:35.130 "nvme_iov_md": false 00:21:35.130 }, 00:21:35.130 "driver_specific": { 00:21:35.130 "nvme": [ 00:21:35.130 { 00:21:35.130 "pci_address": "0000:00:11.0", 00:21:35.130 "trid": { 00:21:35.130 "trtype": "PCIe", 00:21:35.130 "traddr": "0000:00:11.0" 00:21:35.130 }, 00:21:35.130 "ctrlr_data": { 00:21:35.130 "cntlid": 0, 00:21:35.130 "vendor_id": "0x1b36", 00:21:35.130 "model_number": "QEMU NVMe Ctrl", 00:21:35.130 "serial_number": "12341", 00:21:35.130 "firmware_revision": "8.0.0", 00:21:35.130 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:35.130 "oacs": { 00:21:35.130 "security": 0, 00:21:35.130 "format": 1, 00:21:35.130 "firmware": 0, 00:21:35.130 "ns_manage": 1 00:21:35.130 }, 00:21:35.130 "multi_ctrlr": false, 00:21:35.130 "ana_reporting": false 00:21:35.130 }, 00:21:35.130 "vs": { 00:21:35.130 "nvme_version": "1.4" 00:21:35.130 }, 00:21:35.130 "ns_data": { 00:21:35.130 "id": 1, 00:21:35.130 "can_share": false 00:21:35.130 } 00:21:35.130 } 00:21:35.130 ], 00:21:35.130 "mp_policy": "active_passive" 00:21:35.130 } 00:21:35.130 } 00:21:35.130 ]' 00:21:35.130 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:35.130 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:35.130 06:08:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:35.130 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:35.389 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=11a79ce9-919f-4f2f-aa96-0f0f91450447 00:21:35.389 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:35.389 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11a79ce9-919f-4f2f-aa96-0f0f91450447 00:21:35.648 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:35.907 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=c8772269-5a72-4d12-a3f9-325c0f2d36a4 00:21:35.907 06:08:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c8772269-5a72-4d12-a3f9-325c0f2d36a4 00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']'
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size=
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb
00:21:36.166 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:36.426 { 00:21:36.426 "name": "20536c9f-5d2e-40f3-a85b-c0ad6cf024e3", 00:21:36.426 "aliases": [ 00:21:36.426 "lvs/nvme0n1p0" 00:21:36.426 ], 00:21:36.426 "product_name": "Logical Volume", 00:21:36.426 "block_size": 4096, 00:21:36.426 "num_blocks": 26476544, 00:21:36.426 "uuid": "20536c9f-5d2e-40f3-a85b-c0ad6cf024e3", 00:21:36.426 "assigned_rate_limits": { 00:21:36.426 "rw_ios_per_sec": 0, 00:21:36.426 "rw_mbytes_per_sec": 0, 00:21:36.426 "r_mbytes_per_sec": 0, 00:21:36.426 "w_mbytes_per_sec": 0 00:21:36.426 }, 00:21:36.426 "claimed": false, 00:21:36.426 "zoned": false, 00:21:36.426 "supported_io_types": { 00:21:36.426 "read": true, 00:21:36.426 "write": true, 00:21:36.426 "unmap": true, 00:21:36.426 "flush": false, 00:21:36.426 "reset": true, 00:21:36.426 "nvme_admin": false, 00:21:36.426 "nvme_io": false, 00:21:36.426 "nvme_io_md": false, 00:21:36.426 "write_zeroes": true, 00:21:36.426 "zcopy": false, 00:21:36.426 "get_zone_info": false, 00:21:36.426 "zone_management": false, 00:21:36.426 "zone_append": false, 00:21:36.426 "compare": false, 00:21:36.426 "compare_and_write": false, 00:21:36.426 "abort": false, 00:21:36.426 "seek_hole": true, 00:21:36.426 "seek_data": true, 00:21:36.426 "copy": false, 00:21:36.426 "nvme_iov_md": false 00:21:36.426 }, 00:21:36.426 "driver_specific": { 00:21:36.426 "lvol": { 00:21:36.426 "lvol_store_uuid": "c8772269-5a72-4d12-a3f9-325c0f2d36a4", 00:21:36.426 "base_bdev": "nvme0n1", 00:21:36.426 "thin_provision": true, 00:21:36.426 "num_allocated_clusters": 0, 00:21:36.426 "snapshot": false, 00:21:36.426 "clone": false, 00:21:36.426 "esnap_clone": false 00:21:36.426 } 00:21:36.426 } 00:21:36.426 } 00:21:36.426 ]'
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171
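For reference, get_bdev_size is just block_size x num_blocks converted to MiB: 4096 B x 26476544 blocks = 108447924224 B = 103424 MiB here (the same arithmetic produced bdev_size=5120 for nvme0n1 earlier). The base_size=5171 derived on the same ftl/common.sh line is consistent with a one-twentieth (5 %) sizing rule, 103424 / 20 = 5171 in integer arithmetic, though that ratio lives inside ftl/common.sh rather than in this trace. The probe is equivalent to:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 \
      | jq '.[0].block_size * .[0].num_blocks / 1048576'    # -> 103424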
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:21:36.426 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb
00:21:37.028 06:08:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ ... same "Logical Volume" descriptor for 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 as printed above ... ]'
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
00:21:37.288 06:09:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:21:37.547 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0
00:21:37.547 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:37.547 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:37.547 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info
00:21:37.547 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs
00:21:37.547 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb
00:21:37.547 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3
00:21:37.806 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ ... same "Logical Volume" descriptor for 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 as printed above ... ]'
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10
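At this point the FTL geometry is settled: a 5171 MiB split of the cache controller's namespace for the write buffer, and a 10 MiB DRAM budget for the L2P table. The construct arguments assembled next feed bdev_ftl_create; condensed, the two calls are (UUID as in the trace, $RPC is shorthand):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_split_create nvc0n1 -s 5171 1    # one 5171 MiB split off nvc0n1 -> nvc0n1p0
  # -t 240 raises the RPC client timeout: first-time creation scrubs the NV
  # cache region, ~2 s in this run but potentially far slower on real media.
  "$RPC" -t 240 bdev_ftl_create -b ftl0 -d 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 \
      --l2p_dram_limit 10 -c nvc0n1p0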
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 --l2p_dram_limit 10'
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']'
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']'
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
00:21:37.807 06:09:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 20536c9f-5d2e-40f3-a85b-c0ad6cf024e3 --l2p_dram_limit 10 -c nvc0n1p0
00:21:38.067 [2024-12-08 06:09:01.035585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:38.067 [2024-12-08 06:09:01.035668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:38.067 [2024-12-08 06:09:01.035690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:21:38.067 [2024-12-08 06:09:01.035705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:38.067 [2024-12-08 06:09:01.035801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:38.067 [2024-12-08 06:09:01.035824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:38.067 [2024-12-08 06:09:01.035851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:21:38.067 [2024-12-08 06:09:01.035868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:38.067 [2024-12-08 06:09:01.035915] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:38.067 [2024-12-08 06:09:01.036294] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:38.067 [2024-12-08 06:09:01.036322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:38.067 [2024-12-08 06:09:01.036355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:38.067 [2024-12-08 06:09:01.036371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms
00:21:38.067 [2024-12-08 06:09:01.036385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:38.067 [2024-12-08 06:09:01.036542] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ab36fbcb-0078-48c6-9daf-10ae506c4934
00:21:38.068 [2024-12-08 06:09:01.037638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:38.068 [2024-12-08 06:09:01.037831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:21:38.068 [2024-12-08 06:09:01.037869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:21:38.068 [2024-12-08 06:09:01.037883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:38.068 [2024-12-08 06:09:01.042659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:38.068 [2024-12-08 06:09:01.042701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:38.068 [2024-12-08 06:09:01.042738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.690 ms
00:21:38.068 [2024-12-08 06:09:01.042749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:38.068 [2024-12-08 06:09:01.042885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:38.068 [2024-12-08 06:09:01.042904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:38.068 [2024-12-08 06:09:01.042918]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:38.068 [2024-12-08 06:09:01.042931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.068 [2024-12-08 06:09:01.043001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.068 [2024-12-08 06:09:01.043018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:38.068 [2024-12-08 06:09:01.043033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:38.068 [2024-12-08 06:09:01.043054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.068 [2024-12-08 06:09:01.043087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:38.068 [2024-12-08 06:09:01.044786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.068 [2024-12-08 06:09:01.044841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.068 [2024-12-08 06:09:01.044864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:21:38.068 [2024-12-08 06:09:01.044876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.068 [2024-12-08 06:09:01.044917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.068 [2024-12-08 06:09:01.044936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:38.068 [2024-12-08 06:09:01.044973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:38.068 [2024-12-08 06:09:01.044989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.068 [2024-12-08 06:09:01.045025] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:38.068 [2024-12-08 06:09:01.045181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:38.068 [2024-12-08 06:09:01.045218] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:38.068 [2024-12-08 06:09:01.045252] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:38.068 [2024-12-08 06:09:01.045270] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045286] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:38.068 [2024-12-08 06:09:01.045317] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:38.068 [2024-12-08 06:09:01.045343] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:38.068 [2024-12-08 06:09:01.045355] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:38.068 [2024-12-08 06:09:01.045377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.068 [2024-12-08 06:09:01.045390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:38.068 [2024-12-08 06:09:01.045402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:21:38.068 [2024-12-08 06:09:01.045424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.068 [2024-12-08 06:09:01.045512] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.068 [2024-12-08 06:09:01.045533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:38.068 [2024-12-08 06:09:01.045560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:38.068 [2024-12-08 06:09:01.045572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.068 [2024-12-08 06:09:01.045664] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:38.068 [2024-12-08 06:09:01.045686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:38.068 [2024-12-08 06:09:01.045698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:38.068 [2024-12-08 06:09:01.045743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:38.068 [2024-12-08 06:09:01.045777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.068 [2024-12-08 06:09:01.045798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:38.068 [2024-12-08 06:09:01.045811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:38.068 [2024-12-08 06:09:01.045821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.068 [2024-12-08 06:09:01.045835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:38.068 [2024-12-08 06:09:01.045845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:38.068 [2024-12-08 06:09:01.045856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:38.068 [2024-12-08 06:09:01.045878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:38.068 [2024-12-08 06:09:01.045912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:38.068 [2024-12-08 06:09:01.045945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:38.068 [2024-12-08 06:09:01.045977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:38.068 [2024-12-08 06:09:01.045988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.068 [2024-12-08 06:09:01.045998] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:38.068 [2024-12-08 06:09:01.046011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:38.068 [2024-12-08 06:09:01.046021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.068 [2024-12-08 06:09:01.046032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:38.068 [2024-12-08 06:09:01.046042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:38.068 [2024-12-08 06:09:01.046055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.068 [2024-12-08 06:09:01.046065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:38.068 [2024-12-08 06:09:01.046077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:38.068 [2024-12-08 06:09:01.046086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.068 [2024-12-08 06:09:01.046097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:38.068 [2024-12-08 06:09:01.046108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:38.068 [2024-12-08 06:09:01.046119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.068 [2024-12-08 06:09:01.046129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:38.068 [2024-12-08 06:09:01.046141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:38.068 [2024-12-08 06:09:01.046150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.068 [2024-12-08 06:09:01.046161] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:38.068 [2024-12-08 06:09:01.046197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:38.068 [2024-12-08 06:09:01.046230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.068 [2024-12-08 06:09:01.046262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.068 [2024-12-08 06:09:01.046282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:38.068 [2024-12-08 06:09:01.046293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:38.068 [2024-12-08 06:09:01.046308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:38.068 [2024-12-08 06:09:01.046319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:38.068 [2024-12-08 06:09:01.046332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:38.068 [2024-12-08 06:09:01.046343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:38.068 [2024-12-08 06:09:01.046360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:38.068 [2024-12-08 06:09:01.046375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.068 [2024-12-08 06:09:01.046389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:38.068 [2024-12-08 06:09:01.046401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:38.068 [2024-12-08 06:09:01.046432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:38.069 [2024-12-08 06:09:01.046461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:38.069 [2024-12-08 06:09:01.046475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:38.069 [2024-12-08 06:09:01.046487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:38.069 [2024-12-08 06:09:01.046504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:38.069 [2024-12-08 06:09:01.046517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:38.069 [2024-12-08 06:09:01.046531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:38.069 [2024-12-08 06:09:01.046543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:38.069 [2024-12-08 06:09:01.046566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:38.069 [2024-12-08 06:09:01.046578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:38.069 [2024-12-08 06:09:01.046593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:38.069 [2024-12-08 06:09:01.046605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:38.069 [2024-12-08 06:09:01.046634] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:38.069 [2024-12-08 06:09:01.046647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.069 [2024-12-08 06:09:01.046666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:38.069 [2024-12-08 06:09:01.046678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:38.069 [2024-12-08 06:09:01.046692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:38.069 [2024-12-08 06:09:01.046704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:38.069 [2024-12-08 06:09:01.046719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.069 [2024-12-08 06:09:01.046732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:38.069 [2024-12-08 06:09:01.046748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:21:38.069 [2024-12-08 06:09:01.046760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.069 [2024-12-08 06:09:01.046842] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:38.069 [2024-12-08 06:09:01.046869] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:40.603 [2024-12-08 06:09:03.101542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.101889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:40.603 [2024-12-08 06:09:03.102028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2054.709 ms 00:21:40.603 [2024-12-08 06:09:03.102170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.109726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.110012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.603 [2024-12-08 06:09:03.110148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.374 ms 00:21:40.603 [2024-12-08 06:09:03.110341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.110525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.110693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:40.603 [2024-12-08 06:09:03.110818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:40.603 [2024-12-08 06:09:03.110872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.118974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.119205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.603 [2024-12-08 06:09:03.119241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.819 ms 00:21:40.603 [2024-12-08 06:09:03.119256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.119299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.119315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.603 [2024-12-08 06:09:03.119334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:40.603 [2024-12-08 06:09:03.119345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.119740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.119777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.603 [2024-12-08 06:09:03.119808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:21:40.603 [2024-12-08 06:09:03.119834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.119984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.120002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.603 [2024-12-08 06:09:03.120015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:40.603 [2024-12-08 06:09:03.120028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.136805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.136867] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.603 [2024-12-08 06:09:03.136917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.745 ms 00:21:40.603 [2024-12-08 06:09:03.136935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.147335] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:40.603 [2024-12-08 06:09:03.150124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.150179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:40.603 [2024-12-08 06:09:03.150225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.042 ms 00:21:40.603 [2024-12-08 06:09:03.150271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.192347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.192444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:40.603 [2024-12-08 06:09:03.192466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.038 ms 00:21:40.603 [2024-12-08 06:09:03.192483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.192704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.192728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:40.603 [2024-12-08 06:09:03.192741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:21:40.603 [2024-12-08 06:09:03.192765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.196659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.603 [2024-12-08 06:09:03.196740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:40.603 [2024-12-08 06:09:03.196785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.867 ms 00:21:40.603 [2024-12-08 06:09:03.196799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.603 [2024-12-08 06:09:03.200056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.604 [2024-12-08 06:09:03.200117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:40.604 [2024-12-08 06:09:03.200135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:21:40.604 [2024-12-08 06:09:03.200147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.604 [2024-12-08 06:09:03.200593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.604 [2024-12-08 06:09:03.200624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:40.604 [2024-12-08 06:09:03.200639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:21:40.604 [2024-12-08 06:09:03.200656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.604 [2024-12-08 06:09:03.224209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.604 [2024-12-08 06:09:03.224326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:40.604 [2024-12-08 06:09:03.224347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.506 ms 00:21:40.604 [2024-12-08 06:09:03.224363] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:40.604 [2024-12-08 06:09:03.228968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:40.604 [2024-12-08 06:09:03.229037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:21:40.604 [2024-12-08 06:09:03.229056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.522 ms
00:21:40.604 [2024-12-08 06:09:03.229080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:40.604 [2024-12-08 06:09:03.233023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:40.604 [2024-12-08 06:09:03.233281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:21:40.604 [2024-12-08 06:09:03.233310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.898 ms
00:21:40.604 [2024-12-08 06:09:03.233327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:40.604 [2024-12-08 06:09:03.237811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:40.604 [2024-12-08 06:09:03.237873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:40.604 [2024-12-08 06:09:03.237891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.432 ms
00:21:40.604 [2024-12-08 06:09:03.237906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:40.604 [2024-12-08 06:09:03.237961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:40.604 [2024-12-08 06:09:03.237983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:21:40.604 [2024-12-08 06:09:03.237996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:21:40.604 [2024-12-08 06:09:03.238009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:40.604 [2024-12-08 06:09:03.238102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:40.604 [2024-12-08 06:09:03.238123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:40.604 [2024-12-08 06:09:03.238135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:21:40.604 [2024-12-08 06:09:03.238148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:40.604 [2024-12-08 06:09:03.239489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2203.282 ms, result 0
00:21:40.604 {
00:21:40.604 "name": "ftl0",
00:21:40.604 "uuid": "ab36fbcb-0078-48c6-9daf-10ae506c4934"
00:21:40.604 }
00:21:40.604 06:09:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:21:40.604 06:09:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:40.604 06:09:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:21:40.604 06:09:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:21:40.604 06:09:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:21:40.863 /dev/nbd0
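With ftl0 registered and its bdev config saved, the device is exposed over NBD so ordinary block tools can drive it. waitfornbd, traced next, polls /proc/partitions up to 20 times and then proves the device with a single direct-I/O read; roughly (the sleep pacing is an assumption, it is not visible in the trace):

  modprobe nbd
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1    # assumed back-off between polls
  done
  dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct && rm -f nbdtest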
00:21:40.863 06:09:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:21:40.863 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:21:40.863 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i
00:21:40.863 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:21:40.863 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:21:40.863 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:21:41.122 1+0 records in
00:21:41.122 1+0 records out
00:21:41.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623975 s, 6.6 MB/s
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0
00:21:41.122 06:09:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:21:41.122 [2024-12-08 06:09:04.018123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:41.122 [2024-12-08 06:09:04.018328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89296 ]
00:21:41.381 [2024-12-08 06:09:04.169739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:41.381 [2024-12-08 06:09:04.211440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:42.316  [2024-12-08T06:09:06.296Z] Copying: 179/1024 [MB] (179 MBps)
[2024-12-08T06:09:07.673Z] Copying: 367/1024 [MB] (187 MBps)
[2024-12-08T06:09:08.609Z] Copying: 557/1024 [MB] (190 MBps)
[2024-12-08T06:09:09.545Z] Copying: 740/1024 [MB] (182 MBps)
[2024-12-08T06:09:10.112Z] Copying: 924/1024 [MB] (183 MBps)
[2024-12-08T06:09:10.112Z] Copying: 1024/1024 [MB] (average 184 MBps)
00:21:47.067
00:21:47.067 06:09:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:21:49.598 06:09:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:21:49.598 [2024-12-08 06:09:12.145345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
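The write phase now starting: 262144 x 4 KiB = 1 GiB of /dev/urandom is staged to a plain file, checksummed, and replayed onto /dev/nbd0 with O_DIRECT. Staging through a file is what makes the test verifiable, since the same bytes can be md5-checked again after the shutdown. In outline (paths shortened from the trace above):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  "$DD" -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144    # 1 GiB of random data
  md5sum testfile                                                          # reference checksum
  "$DD" -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct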
00:21:49.598 [2024-12-08 06:09:12.145558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89384 ]
00:21:49.598 [2024-12-08 06:09:12.295708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:49.598 [2024-12-08 06:09:12.339483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:50.535  [2024-12-08T06:09:14.516Z] Copying: 13/1024 [MB] (13 MBps)
[... some sixty further "Copying: N/1024 [MB]" progress updates, steady at 12-16 MBps, omitted ...]
[2024-12-08T06:10:19.395Z] Copying: 1013/1024 [MB] (15 MBps)
[2024-12-08T06:10:19.395Z] Copying: 1024/1024 [MB] (average 15 MBps)
00:22:56.350
00:22:56.350 06:10:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:22:56.610 06:10:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:22:56.869 06:10:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:22:56.869 [2024-12-08 06:10:19.878411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.878470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:56.869 [2024-12-08 06:10:19.878506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:22:56.869 [2024-12-08 06:10:19.878519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.869 [2024-12-08 06:10:19.878577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:56.869 [2024-12-08 06:10:19.879020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.879046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:56.869 [2024-12-08 06:10:19.879060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms
00:22:56.869 [2024-12-08 06:10:19.879077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.869 [2024-12-08 06:10:19.880759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.880991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:56.869 [2024-12-08 06:10:19.881021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.653 ms
00:22:56.869 [2024-12-08 06:10:19.881037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.869 [2024-12-08 06:10:19.899056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.899124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:22:56.869 [2024-12-08 06:10:19.899143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.986 ms
00:22:56.869 [2024-12-08 06:10:19.899157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
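Teardown mirrors setup: flush outstanding writes, detach the NBD device, then unload the FTL bdev:

  sync /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

This is the graceful unload path, so the trace that follows persists the L2P and the metadata regions and finishes by setting the FTL clean state; the dirty-shutdown case the test is named for is presumably exercised later in the script, beyond this excerpt.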
00:22:56.869 [2024-12-08 06:10:19.905529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.905604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:22:56.869 [2024-12-08 06:10:19.905628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.276 ms
00:22:56.869 [2024-12-08 06:10:19.905642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.869 [2024-12-08 06:10:19.907081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.907147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:22:56.869 [2024-12-08 06:10:19.907163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms
00:22:56.869 [2024-12-08 06:10:19.907176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.869 [2024-12-08 06:10:19.911480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.911705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:56.869 [2024-12-08 06:10:19.911734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.227 ms
00:22:56.869 [2024-12-08 06:10:19.911754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.869 [2024-12-08 06:10:19.911914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.869 [2024-12-08 06:10:19.911953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:56.869 [2024-12-08 06:10:19.911983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms
00:22:56.869 [2024-12-08 06:10:19.912008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:57.129 [2024-12-08 06:10:19.914102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.129 [2024-12-08 06:10:19.914163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:57.129 [2024-12-08 06:10:19.914180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms
00:22:57.129 [2024-12-08 06:10:19.914229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:57.129 [2024-12-08 06:10:19.915705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.129 [2024-12-08 06:10:19.915782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:57.129 [2024-12-08 06:10:19.915814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms
00:22:57.129 [2024-12-08 06:10:19.915842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:57.129 [2024-12-08 06:10:19.916975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.129 [2024-12-08 06:10:19.917067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:57.129 [2024-12-08 06:10:19.917101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms
00:22:57.129 [2024-12-08 06:10:19.917119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:57.129 [2024-12-08 06:10:19.918234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.130 [2024-12-08 06:10:19.918337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:57.130 [2024-12-08 06:10:19.918369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms
00:22:57.130 [2024-12-08 06:10:19.918381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:57.130 [2024-12-08 06:10:19.918422] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:57.130 [2024-12-08 06:10:19.918449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:22:57.130 [2024-12-08 06:10:19.918463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[... Bands 3 through 74 report the identical state: 0 / 261120 wr_cnt: 0 state: free ...]
00:22:57.130 [2024-12-08 06:10:19.919557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75:
0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:57.130 [2024-12-08 06:10:19.919708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:57.131 [2024-12-08 06:10:19.919931] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:57.131 [2024-12-08 06:10:19.919959] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ab36fbcb-0078-48c6-9daf-10ae506c4934 00:22:57.131 [2024-12-08 06:10:19.919976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:57.131 [2024-12-08 06:10:19.919990] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:57.131 [2024-12-08 06:10:19.920003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:57.131 [2024-12-08 06:10:19.920015] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:57.131 [2024-12-08 06:10:19.920030] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:57.131 [2024-12-08 06:10:19.920054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:57.131 [2024-12-08 06:10:19.920068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:57.131 [2024-12-08 06:10:19.920079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:57.131 [2024-12-08 06:10:19.920091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:57.131 [2024-12-08 06:10:19.920104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.131 [2024-12-08 06:10:19.920118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:57.131 [2024-12-08 06:10:19.920132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:22:57.131 [2024-12-08 06:10:19.920146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.921603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.131 [2024-12-08 06:10:19.921795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:57.131 [2024-12-08 06:10:19.921821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:22:57.131 [2024-12-08 06:10:19.921836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.921918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.131 [2024-12-08 06:10:19.921937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:57.131 [2024-12-08 06:10:19.921951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:57.131 [2024-12-08 06:10:19.921964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.927334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.927386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.131 [2024-12-08 06:10:19.927419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.927441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.927548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.927571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.131 [2024-12-08 06:10:19.927584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.927598] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.927705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.927733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.131 [2024-12-08 06:10:19.927746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.927760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.927818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.927834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:57.131 [2024-12-08 06:10:19.927845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.927866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.936237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.936418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:57.131 [2024-12-08 06:10:19.936440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.936455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.944101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.944421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:57.131 [2024-12-08 06:10:19.944447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.944463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.944549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.944580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:57.131 [2024-12-08 06:10:19.944593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.944607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.944725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.944747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:57.131 [2024-12-08 06:10:19.944760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.944773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.944897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.944920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:57.131 [2024-12-08 06:10:19.944934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.944973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.945050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.945075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:57.131 [2024-12-08 06:10:19.945096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:57.131 [2024-12-08 06:10:19.945110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.945160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.945211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:57.131 [2024-12-08 06:10:19.945230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.945255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.945321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.131 [2024-12-08 06:10:19.945345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:57.131 [2024-12-08 06:10:19.945357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.131 [2024-12-08 06:10:19.945371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.131 [2024-12-08 06:10:19.945549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.079 ms, result 0 00:22:57.131 true 00:22:57.131 06:10:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89163 00:22:57.131 06:10:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89163 00:22:57.131 06:10:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:57.131 [2024-12-08 06:10:20.069703] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:57.131 [2024-12-08 06:10:20.069880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90067 ] 00:22:57.390 [2024-12-08 06:10:20.214293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.390 [2024-12-08 06:10:20.249392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.326  [2024-12-08T06:10:22.307Z] Copying: 179/1024 [MB] (179 MBps) [2024-12-08T06:10:23.683Z] Copying: 358/1024 [MB] (179 MBps) [2024-12-08T06:10:24.619Z] Copying: 537/1024 [MB] (178 MBps) [2024-12-08T06:10:25.554Z] Copying: 713/1024 [MB] (176 MBps) [2024-12-08T06:10:26.118Z] Copying: 883/1024 [MB] (169 MBps) [2024-12-08T06:10:26.376Z] Copying: 1024/1024 [MB] (average 175 MBps) 00:23:03.331 00:23:03.331 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89163 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:03.331 06:10:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:03.588 [2024-12-08 06:10:26.413044] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:23:03.588 [2024-12-08 06:10:26.413447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90135 ] 00:23:03.588 [2024-12-08 06:10:26.560040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.588 [2024-12-08 06:10:26.596440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.844 [2024-12-08 06:10:26.684996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:03.844 [2024-12-08 06:10:26.685083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:03.844 [2024-12-08 06:10:26.750395] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:03.844 [2024-12-08 06:10:26.750677] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:03.844 [2024-12-08 06:10:26.750846] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:04.102 [2024-12-08 06:10:27.016175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.016462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:04.102 [2024-12-08 06:10:27.016494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:04.102 [2024-12-08 06:10:27.016524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.016618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.016640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.102 [2024-12-08 06:10:27.016673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:04.102 [2024-12-08 06:10:27.016684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.016715] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:04.102 [2024-12-08 06:10:27.016994] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:04.102 [2024-12-08 06:10:27.017039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.017050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.102 [2024-12-08 06:10:27.017062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:23:04.102 [2024-12-08 06:10:27.017073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.018146] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:04.102 [2024-12-08 06:10:27.020447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.020493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:04.102 [2024-12-08 06:10:27.020526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms 00:23:04.102 [2024-12-08 06:10:27.020538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.020634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.020656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:04.102 [2024-12-08 06:10:27.020669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:04.102 [2024-12-08 06:10:27.020687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.025181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.025246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.102 [2024-12-08 06:10:27.025278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.420 ms 00:23:04.102 [2024-12-08 06:10:27.025289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.025422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.025441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.102 [2024-12-08 06:10:27.025461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:04.102 [2024-12-08 06:10:27.025475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.025550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.025576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:04.102 [2024-12-08 06:10:27.025589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:04.102 [2024-12-08 06:10:27.025613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.025659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.102 [2024-12-08 06:10:27.027104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.027139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.102 [2024-12-08 06:10:27.027170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:23:04.102 [2024-12-08 06:10:27.027181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.027311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.102 [2024-12-08 06:10:27.027333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:04.102 [2024-12-08 06:10:27.027345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:04.102 [2024-12-08 06:10:27.027367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.102 [2024-12-08 06:10:27.027394] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:04.102 [2024-12-08 06:10:27.027428] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:04.102 [2024-12-08 06:10:27.027499] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:04.102 [2024-12-08 06:10:27.027529] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:04.102 [2024-12-08 06:10:27.027641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:04.102 [2024-12-08 06:10:27.027656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:04.102 
[2024-12-08 06:10:27.027670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:04.102 [2024-12-08 06:10:27.027685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:04.102 [2024-12-08 06:10:27.027697] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:04.102 [2024-12-08 06:10:27.027718] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:04.103 [2024-12-08 06:10:27.027744] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:04.103 [2024-12-08 06:10:27.027769] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:04.103 [2024-12-08 06:10:27.027786] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:04.103 [2024-12-08 06:10:27.027797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.027808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:04.103 [2024-12-08 06:10:27.027826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:23:04.103 [2024-12-08 06:10:27.027836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.027924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.027937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:04.103 [2024-12-08 06:10:27.027947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:04.103 [2024-12-08 06:10:27.027957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.028060] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:04.103 [2024-12-08 06:10:27.028077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:04.103 [2024-12-08 06:10:27.028088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.103 [2024-12-08 06:10:27.028104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.103 [2024-12-08 06:10:27.028114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:04.103 [2024-12-08 06:10:27.028124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:04.103 [2024-12-08 06:10:27.028133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:04.103 [2024-12-08 06:10:27.028143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:04.103 [2024-12-08 06:10:27.028153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:04.103 [2024-12-08 06:10:27.028162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.103 [2024-12-08 06:10:27.028179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:04.103 [2024-12-08 06:10:27.028205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:04.103 [2024-12-08 06:10:27.028215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.103 [2024-12-08 06:10:27.028458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:04.103 [2024-12-08 06:10:27.028521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:04.103 [2024-12-08 06:10:27.028578] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.103 [2024-12-08 06:10:27.028617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:04.103 [2024-12-08 06:10:27.028766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:04.103 [2024-12-08 06:10:27.028920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.103 [2024-12-08 06:10:27.028972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:04.103 [2024-12-08 06:10:27.029011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:04.103 [2024-12-08 06:10:27.029049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.103 [2024-12-08 06:10:27.029178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:04.103 [2024-12-08 06:10:27.029264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:04.103 [2024-12-08 06:10:27.029305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.103 [2024-12-08 06:10:27.029402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:04.103 [2024-12-08 06:10:27.029454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:04.103 [2024-12-08 06:10:27.029581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.103 [2024-12-08 06:10:27.029630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:04.103 [2024-12-08 06:10:27.029800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:04.103 [2024-12-08 06:10:27.029849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.103 [2024-12-08 06:10:27.029865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:04.103 [2024-12-08 06:10:27.029877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:04.103 [2024-12-08 06:10:27.029888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.103 [2024-12-08 06:10:27.029898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:04.103 [2024-12-08 06:10:27.029909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:04.103 [2024-12-08 06:10:27.029919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.103 [2024-12-08 06:10:27.029929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:04.103 [2024-12-08 06:10:27.029939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:04.103 [2024-12-08 06:10:27.029949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.103 [2024-12-08 06:10:27.029959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:04.103 [2024-12-08 06:10:27.029969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:04.103 [2024-12-08 06:10:27.029986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.103 [2024-12-08 06:10:27.029997] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:04.103 [2024-12-08 06:10:27.030009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:04.103 [2024-12-08 06:10:27.030020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.103 [2024-12-08 06:10:27.030030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.103 [2024-12-08 
06:10:27.030042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:04.103 [2024-12-08 06:10:27.030052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:04.103 [2024-12-08 06:10:27.030062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:04.103 [2024-12-08 06:10:27.030073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:04.103 [2024-12-08 06:10:27.030083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:04.103 [2024-12-08 06:10:27.030093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:04.103 [2024-12-08 06:10:27.030106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:04.103 [2024-12-08 06:10:27.030121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.103 [2024-12-08 06:10:27.030134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:04.103 [2024-12-08 06:10:27.030144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:04.103 [2024-12-08 06:10:27.030155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:04.103 [2024-12-08 06:10:27.030170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:04.103 [2024-12-08 06:10:27.030212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:04.103 [2024-12-08 06:10:27.030238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:04.103 [2024-12-08 06:10:27.030250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:04.103 [2024-12-08 06:10:27.030261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:04.103 [2024-12-08 06:10:27.030273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:04.103 [2024-12-08 06:10:27.030285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:04.103 [2024-12-08 06:10:27.030296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:04.103 [2024-12-08 06:10:27.030307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:04.103 [2024-12-08 06:10:27.030318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:04.103 [2024-12-08 06:10:27.030330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:04.103 [2024-12-08 06:10:27.030341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:04.103 [2024-12-08 06:10:27.030363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.103 [2024-12-08 06:10:27.030375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:04.103 [2024-12-08 06:10:27.030387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:04.103 [2024-12-08 06:10:27.030398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:04.103 [2024-12-08 06:10:27.030414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:04.103 [2024-12-08 06:10:27.030429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.030445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:04.103 [2024-12-08 06:10:27.030457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:23:04.103 [2024-12-08 06:10:27.030477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.056172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.056252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.103 [2024-12-08 06:10:27.056287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.572 ms 00:23:04.103 [2024-12-08 06:10:27.056300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.056422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.056438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:04.103 [2024-12-08 06:10:27.056452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:04.103 [2024-12-08 06:10:27.056470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.064856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.064912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.103 [2024-12-08 06:10:27.064946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.278 ms 00:23:04.103 [2024-12-08 06:10:27.064957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.065022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.065036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.103 [2024-12-08 06:10:27.065056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:04.103 [2024-12-08 06:10:27.065069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.065454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.065474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.103 [2024-12-08 06:10:27.065500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:23:04.103 [2024-12-08 06:10:27.065511] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.065687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.065705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.103 [2024-12-08 06:10:27.065717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:23:04.103 [2024-12-08 06:10:27.065733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.070887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.070951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.103 [2024-12-08 06:10:27.070969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.125 ms 00:23:04.103 [2024-12-08 06:10:27.070994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.073468] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:04.103 [2024-12-08 06:10:27.073528] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:04.103 [2024-12-08 06:10:27.073576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.073589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:04.103 [2024-12-08 06:10:27.073616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms 00:23:04.103 [2024-12-08 06:10:27.073642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.090132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.090212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:04.103 [2024-12-08 06:10:27.090233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.430 ms 00:23:04.103 [2024-12-08 06:10:27.090273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.092656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.092700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:04.103 [2024-12-08 06:10:27.092749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.292 ms 00:23:04.103 [2024-12-08 06:10:27.092760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.094512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.094572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:04.103 [2024-12-08 06:10:27.094603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:23:04.103 [2024-12-08 06:10:27.094614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.095010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.095037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:04.103 [2024-12-08 06:10:27.095058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:04.103 [2024-12-08 06:10:27.095069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 
[2024-12-08 06:10:27.112480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.112577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:04.103 [2024-12-08 06:10:27.112614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.387 ms 00:23:04.103 [2024-12-08 06:10:27.112627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.121265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:04.103 [2024-12-08 06:10:27.124271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.124461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:04.103 [2024-12-08 06:10:27.124507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.559 ms 00:23:04.103 [2024-12-08 06:10:27.124526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.124664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.124684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:04.103 [2024-12-08 06:10:27.124698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:04.103 [2024-12-08 06:10:27.124710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.124833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.124853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:04.103 [2024-12-08 06:10:27.124865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:04.103 [2024-12-08 06:10:27.124877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.124919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.124939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:04.103 [2024-12-08 06:10:27.124952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:04.103 [2024-12-08 06:10:27.124963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.125023] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:04.103 [2024-12-08 06:10:27.125044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.125059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:04.103 [2024-12-08 06:10:27.125070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:04.103 [2024-12-08 06:10:27.125081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.128649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.128692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:04.103 [2024-12-08 06:10:27.128726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.541 ms 00:23:04.103 [2024-12-08 06:10:27.128743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.128833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.103 [2024-12-08 06:10:27.128852] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:04.103 [2024-12-08 06:10:27.128869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:04.103 [2024-12-08 06:10:27.128880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.103 [2024-12-08 06:10:27.130177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.389 ms, result 0 00:23:05.478  [2024-12-08T06:10:29.459Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-08T06:10:30.437Z] Copying: 49/1024 [MB] (24 MBps) [2024-12-08T06:10:31.372Z] Copying: 73/1024 [MB] (24 MBps) [2024-12-08T06:10:32.306Z] Copying: 98/1024 [MB] (24 MBps) [2024-12-08T06:10:33.241Z] Copying: 122/1024 [MB] (24 MBps) [2024-12-08T06:10:34.177Z] Copying: 146/1024 [MB] (24 MBps) [2024-12-08T06:10:35.552Z] Copying: 170/1024 [MB] (23 MBps) [2024-12-08T06:10:36.484Z] Copying: 193/1024 [MB] (23 MBps) [2024-12-08T06:10:37.416Z] Copying: 217/1024 [MB] (23 MBps) [2024-12-08T06:10:38.347Z] Copying: 240/1024 [MB] (23 MBps) [2024-12-08T06:10:39.279Z] Copying: 264/1024 [MB] (23 MBps) [2024-12-08T06:10:40.213Z] Copying: 288/1024 [MB] (23 MBps) [2024-12-08T06:10:41.149Z] Copying: 313/1024 [MB] (24 MBps) [2024-12-08T06:10:42.527Z] Copying: 336/1024 [MB] (22 MBps) [2024-12-08T06:10:43.463Z] Copying: 359/1024 [MB] (23 MBps) [2024-12-08T06:10:44.399Z] Copying: 383/1024 [MB] (23 MBps) [2024-12-08T06:10:45.332Z] Copying: 406/1024 [MB] (23 MBps) [2024-12-08T06:10:46.264Z] Copying: 430/1024 [MB] (23 MBps) [2024-12-08T06:10:47.196Z] Copying: 454/1024 [MB] (23 MBps) [2024-12-08T06:10:48.188Z] Copying: 478/1024 [MB] (23 MBps) [2024-12-08T06:10:49.562Z] Copying: 501/1024 [MB] (23 MBps) [2024-12-08T06:10:50.495Z] Copying: 526/1024 [MB] (24 MBps) [2024-12-08T06:10:51.432Z] Copying: 550/1024 [MB] (24 MBps) [2024-12-08T06:10:52.367Z] Copying: 574/1024 [MB] (24 MBps) [2024-12-08T06:10:53.304Z] Copying: 599/1024 [MB] (24 MBps) [2024-12-08T06:10:54.240Z] Copying: 622/1024 [MB] (23 MBps) [2024-12-08T06:10:55.175Z] Copying: 647/1024 [MB] (24 MBps) [2024-12-08T06:10:56.147Z] Copying: 671/1024 [MB] (24 MBps) [2024-12-08T06:10:57.518Z] Copying: 695/1024 [MB] (24 MBps) [2024-12-08T06:10:58.451Z] Copying: 719/1024 [MB] (23 MBps) [2024-12-08T06:10:59.383Z] Copying: 744/1024 [MB] (24 MBps) [2024-12-08T06:11:00.316Z] Copying: 768/1024 [MB] (24 MBps) [2024-12-08T06:11:01.250Z] Copying: 793/1024 [MB] (24 MBps) [2024-12-08T06:11:02.185Z] Copying: 817/1024 [MB] (24 MBps) [2024-12-08T06:11:03.559Z] Copying: 842/1024 [MB] (24 MBps) [2024-12-08T06:11:04.493Z] Copying: 867/1024 [MB] (24 MBps) [2024-12-08T06:11:05.426Z] Copying: 891/1024 [MB] (24 MBps) [2024-12-08T06:11:06.360Z] Copying: 916/1024 [MB] (24 MBps) [2024-12-08T06:11:07.294Z] Copying: 940/1024 [MB] (24 MBps) [2024-12-08T06:11:08.243Z] Copying: 964/1024 [MB] (24 MBps) [2024-12-08T06:11:09.180Z] Copying: 989/1024 [MB] (24 MBps) [2024-12-08T06:11:10.558Z] Copying: 1013/1024 [MB] (24 MBps) [2024-12-08T06:11:10.816Z] Copying: 1048068/1048576 [kB] (9884 kBps) [2024-12-08T06:11:10.816Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 06:11:10.815773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.816035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:48.032 [2024-12-08 06:11:10.816067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:48.032 [2024-12-08 06:11:10.816081] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.819873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:48.032 [2024-12-08 06:11:10.823660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.823705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:48.032 [2024-12-08 06:11:10.823733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:23:48.032 [2024-12-08 06:11:10.823745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.835718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.835809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:48.032 [2024-12-08 06:11:10.835843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.902 ms 00:23:48.032 [2024-12-08 06:11:10.835854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.858855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.858901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:48.032 [2024-12-08 06:11:10.858926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.980 ms 00:23:48.032 [2024-12-08 06:11:10.858938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.865476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.865515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:48.032 [2024-12-08 06:11:10.865547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.496 ms 00:23:48.032 [2024-12-08 06:11:10.865572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.866920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.866959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:48.032 [2024-12-08 06:11:10.866990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:23:48.032 [2024-12-08 06:11:10.867001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.870105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.870160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:48.032 [2024-12-08 06:11:10.870247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:23:48.032 [2024-12-08 06:11:10.870261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.988449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.988522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:48.032 [2024-12-08 06:11:10.988574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.147 ms 00:23:48.032 [2024-12-08 06:11:10.988587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.990489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.990524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist band info metadata 00:23:48.032 [2024-12-08 06:11:10.990570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:23:48.032 [2024-12-08 06:11:10.990580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.992123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.992161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:48.032 [2024-12-08 06:11:10.992193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:23:48.032 [2024-12-08 06:11:10.992239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.993410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.993618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:48.032 [2024-12-08 06:11:10.993645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:23:48.032 [2024-12-08 06:11:10.993657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.994813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.032 [2024-12-08 06:11:10.994867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:48.032 [2024-12-08 06:11:10.994899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:23:48.032 [2024-12-08 06:11:10.994910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.032 [2024-12-08 06:11:10.994946] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:48.032 [2024-12-08 06:11:10.994968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:23:48.032 [2024-12-08 06:11:10.994983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.994995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:23:48.032 [2024-12-08 06:11:10.995133 .. 06:11:10.996064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14 .. Band 87: 0 / 261120 wr_cnt: 0 state: free (74 identical entries) 00:23:48.033 [2024-12-08
06:11:10.996074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:48.033 [2024-12-08 06:11:10.996274] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:48.033 [2024-12-08 06:11:10.996291] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ab36fbcb-0078-48c6-9daf-10ae506c4934 00:23:48.033 [2024-12-08 06:11:10.996307] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:23:48.033 [2024-12-08 06:11:10.996318] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:23:48.033 [2024-12-08 06:11:10.996329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:23:48.033 [2024-12-08 06:11:10.996341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:23:48.033 [2024-12-08 06:11:10.996352] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:48.033 [2024-12-08 06:11:10.996364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:48.033 [2024-12-08 06:11:10.996375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:48.033 [2024-12-08 06:11:10.996385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:48.033 [2024-12-08 06:11:10.996394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:48.033 [2024-12-08 06:11:10.996406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.033 [2024-12-08 06:11:10.996417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:48.033 [2024-12-08 06:11:10.996439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:23:48.033 [2024-12-08 06:11:10.996450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:48.033 [2024-12-08 06:11:10.997856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.033 [2024-12-08 06:11:10.997878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:48.033 [2024-12-08 06:11:10.997892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:23:48.033 [2024-12-08 06:11:10.997903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.033 [2024-12-08 06:11:10.997976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.033 [2024-12-08 06:11:10.997989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:48.034 [2024-12-08 06:11:10.998008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:48.034 [2024-12-08 06:11:10.998018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.002689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.002724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.034 [2024-12-08 06:11:11.002738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.002749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.002806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.002820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.034 [2024-12-08 06:11:11.002837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.002847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.002898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.002915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.034 [2024-12-08 06:11:11.002927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.002938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.002958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.002971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.034 [2024-12-08 06:11:11.002982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.002998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.011683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.011750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.034 [2024-12-08 06:11:11.011769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.011780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.018831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.018880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:48.034 [2024-12-08 06:11:11.018927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 
06:11:11.018939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.019006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.019022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.034 [2024-12-08 06:11:11.019033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.019044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.019071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.019084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.034 [2024-12-08 06:11:11.019107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.019118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.019260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.019281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.034 [2024-12-08 06:11:11.019294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.019316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.019367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.019385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:48.034 [2024-12-08 06:11:11.019398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.019419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.019488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.019513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:48.034 [2024-12-08 06:11:11.019539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.019550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.019603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.034 [2024-12-08 06:11:11.019619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:48.034 [2024-12-08 06:11:11.019632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.034 [2024-12-08 06:11:11.019643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-12-08 06:11:11.019806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 206.230 ms, result 0 00:23:48.969 00:23:48.969 00:23:48.969 06:11:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:50.873 06:11:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:50.873 [2024-12-08 06:11:13.899618] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
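Aside: the WAF figure printed by ftl_debug.c in the "Dump statistics" step of the shutdown trace above follows directly from the two write counters dumped alongside it (total writes: 130496, user writes: 129536; note total valid LBAs also equals the user-write count). A minimal standalone sketch — plain C, not SPDK source, assuming WAF here is simply total media writes divided by user writes, which is consistent with the dumped values:

#include <stdio.h>

int main(void)
{
    /* Counter values copied from the [FTL][ftl0] stats dump above. */
    const double total_writes = 130496.0; /* user + metadata/relocation writes */
    const double user_writes  = 129536.0; /* host-initiated writes only */

    /* Write amplification factor: media writes per user write.
     * 130496 / 129536 = 1.00741..., matching the logged "WAF: 1.0074". */
    printf("WAF: %.4f\n", total_writes / user_writes);
    return 0;
}

The 960-block difference between the two counters is the metadata/relocation overhead the FTL incurred on top of the 129536 user writes during this run.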
00:23:50.873 [2024-12-08 06:11:13.899815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90604 ] 00:23:51.132 [2024-12-08 06:11:14.050459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.132 [2024-12-08 06:11:14.094400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.392 [2024-12-08 06:11:14.188255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:51.392 [2024-12-08 06:11:14.188361] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:51.392 [2024-12-08 06:11:14.348085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.392 [2024-12-08 06:11:14.348495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:51.392 [2024-12-08 06:11:14.348537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:51.392 [2024-12-08 06:11:14.348550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.392 [2024-12-08 06:11:14.348655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.392 [2024-12-08 06:11:14.348673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.392 [2024-12-08 06:11:14.348694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:51.392 [2024-12-08 06:11:14.348715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.392 [2024-12-08 06:11:14.348745] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:51.392 [2024-12-08 06:11:14.349116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:51.392 [2024-12-08 06:11:14.349144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.392 [2024-12-08 06:11:14.349154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.392 [2024-12-08 06:11:14.349165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:23:51.392 [2024-12-08 06:11:14.349183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.392 [2024-12-08 06:11:14.350318] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:51.392 [2024-12-08 06:11:14.352665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.392 [2024-12-08 06:11:14.352702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:51.393 [2024-12-08 06:11:14.352732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:23:51.393 [2024-12-08 06:11:14.352743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.352825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.393 [2024-12-08 06:11:14.352846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:51.393 [2024-12-08 06:11:14.352857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:51.393 [2024-12-08 06:11:14.352867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.357193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:51.393 [2024-12-08 06:11:14.357257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.393 [2024-12-08 06:11:14.357288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:23:51.393 [2024-12-08 06:11:14.357298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.357410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.393 [2024-12-08 06:11:14.357429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.393 [2024-12-08 06:11:14.357440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:51.393 [2024-12-08 06:11:14.357450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.357518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.393 [2024-12-08 06:11:14.357534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:51.393 [2024-12-08 06:11:14.357559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:51.393 [2024-12-08 06:11:14.357569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.357622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:51.393 [2024-12-08 06:11:14.358857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.393 [2024-12-08 06:11:14.358893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.393 [2024-12-08 06:11:14.358907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:23:51.393 [2024-12-08 06:11:14.358929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.358963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.393 [2024-12-08 06:11:14.358977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:51.393 [2024-12-08 06:11:14.358988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:51.393 [2024-12-08 06:11:14.358999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.359024] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:51.393 [2024-12-08 06:11:14.359061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:51.393 [2024-12-08 06:11:14.359106] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:51.393 [2024-12-08 06:11:14.359140] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:51.393 [2024-12-08 06:11:14.359249] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:51.393 [2024-12-08 06:11:14.359270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:51.393 [2024-12-08 06:11:14.359297] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:51.393 [2024-12-08 06:11:14.359311] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359327] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359337] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:51.393 [2024-12-08 06:11:14.359354] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:51.393 [2024-12-08 06:11:14.359364] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:51.393 [2024-12-08 06:11:14.359380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:51.393 [2024-12-08 06:11:14.359391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.393 [2024-12-08 06:11:14.359400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:51.393 [2024-12-08 06:11:14.359410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:23:51.393 [2024-12-08 06:11:14.359419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.359542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.393 [2024-12-08 06:11:14.359562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:51.393 [2024-12-08 06:11:14.359579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:51.393 [2024-12-08 06:11:14.359589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.393 [2024-12-08 06:11:14.359693] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:51.393 [2024-12-08 06:11:14.359709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:51.393 [2024-12-08 06:11:14.359720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:51.393 [2024-12-08 06:11:14.359750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:51.393 [2024-12-08 06:11:14.359793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:51.393 [2024-12-08 06:11:14.359825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:51.393 [2024-12-08 06:11:14.359833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:51.393 [2024-12-08 06:11:14.359842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:51.393 [2024-12-08 06:11:14.359851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:51.393 [2024-12-08 06:11:14.359860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:51.393 [2024-12-08 06:11:14.359869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:51.393 [2024-12-08 06:11:14.359891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359900] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:51.393 [2024-12-08 06:11:14.359917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:51.393 [2024-12-08 06:11:14.359943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:51.393 [2024-12-08 06:11:14.359969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:51.393 [2024-12-08 06:11:14.359978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.393 [2024-12-08 06:11:14.359987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:51.393 [2024-12-08 06:11:14.359995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:51.393 [2024-12-08 06:11:14.360004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.393 [2024-12-08 06:11:14.360013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:51.393 [2024-12-08 06:11:14.360024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:51.393 [2024-12-08 06:11:14.360034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:51.393 [2024-12-08 06:11:14.360043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:51.393 [2024-12-08 06:11:14.360052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:51.393 [2024-12-08 06:11:14.360060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:51.393 [2024-12-08 06:11:14.360069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:51.393 [2024-12-08 06:11:14.360078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:51.393 [2024-12-08 06:11:14.360086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.393 [2024-12-08 06:11:14.360095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:51.393 [2024-12-08 06:11:14.360103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:51.393 [2024-12-08 06:11:14.360113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.393 [2024-12-08 06:11:14.360121] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:51.393 [2024-12-08 06:11:14.360131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:51.393 [2024-12-08 06:11:14.360140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:51.393 [2024-12-08 06:11:14.360155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.393 [2024-12-08 06:11:14.360166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:51.393 [2024-12-08 06:11:14.360178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:51.393 [2024-12-08 06:11:14.360187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:51.393 
[2024-12-08 06:11:14.360196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:51.393 [2024-12-08 06:11:14.360205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:51.393 [2024-12-08 06:11:14.360214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:51.393 [2024-12-08 06:11:14.360556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:51.393 [2024-12-08 06:11:14.360663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:51.393 [2024-12-08 06:11:14.360729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:51.394 [2024-12-08 06:11:14.360778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:51.394 [2024-12-08 06:11:14.360900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:51.394 [2024-12-08 06:11:14.360954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:51.394 [2024-12-08 06:11:14.361003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:51.394 [2024-12-08 06:11:14.361051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:51.394 [2024-12-08 06:11:14.361220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:51.394 [2024-12-08 06:11:14.361344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:51.394 [2024-12-08 06:11:14.361397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:51.394 [2024-12-08 06:11:14.361537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:51.394 [2024-12-08 06:11:14.361595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:51.394 [2024-12-08 06:11:14.361696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:51.394 [2024-12-08 06:11:14.361754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:51.394 [2024-12-08 06:11:14.361893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:51.394 [2024-12-08 06:11:14.361951] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:51.394 [2024-12-08 06:11:14.362004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:51.394 [2024-12-08 06:11:14.362119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:51.394 [2024-12-08 06:11:14.362263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:51.394 [2024-12-08 06:11:14.362279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:51.394 [2024-12-08 06:11:14.362290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:51.394 [2024-12-08 06:11:14.362303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.362328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:51.394 [2024-12-08 06:11:14.362340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:23:51.394 [2024-12-08 06:11:14.362350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.378006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.378342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.394 [2024-12-08 06:11:14.378459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.564 ms 00:23:51.394 [2024-12-08 06:11:14.378508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.378664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.378748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:51.394 [2024-12-08 06:11:14.378822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:51.394 [2024-12-08 06:11:14.378861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.386962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.387199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.394 [2024-12-08 06:11:14.387339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.967 ms 00:23:51.394 [2024-12-08 06:11:14.387388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.387514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.387633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.394 [2024-12-08 06:11:14.387705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:51.394 [2024-12-08 06:11:14.387756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.388159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.388232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.394 [2024-12-08 06:11:14.388313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:23:51.394 [2024-12-08 06:11:14.388426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.388635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.388761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.394 [2024-12-08 06:11:14.388852] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:51.394 [2024-12-08 06:11:14.388896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.393754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.393921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.394 [2024-12-08 06:11:14.394038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:23:51.394 [2024-12-08 06:11:14.394084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.396715] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:51.394 [2024-12-08 06:11:14.396916] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:51.394 [2024-12-08 06:11:14.397053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.397094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:51.394 [2024-12-08 06:11:14.397237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:23:51.394 [2024-12-08 06:11:14.397290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.413036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.413250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:51.394 [2024-12-08 06:11:14.413401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.674 ms 00:23:51.394 [2024-12-08 06:11:14.413452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.415517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.415672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:51.394 [2024-12-08 06:11:14.415820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms 00:23:51.394 [2024-12-08 06:11:14.415870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.417697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.417863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:51.394 [2024-12-08 06:11:14.417990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:23:51.394 [2024-12-08 06:11:14.418011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.394 [2024-12-08 06:11:14.418512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.394 [2024-12-08 06:11:14.418544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:51.394 [2024-12-08 06:11:14.418557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:23:51.394 [2024-12-08 06:11:14.418582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.435403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.435519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:51.654 [2024-12-08 06:11:14.435557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.771 ms 00:23:51.654 [2024-12-08 06:11:14.435571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.444190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:51.654 [2024-12-08 06:11:14.446825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.446856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:51.654 [2024-12-08 06:11:14.446886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.184 ms 00:23:51.654 [2024-12-08 06:11:14.446901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.446971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.446988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:51.654 [2024-12-08 06:11:14.446999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:51.654 [2024-12-08 06:11:14.447008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.449047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.449100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:51.654 [2024-12-08 06:11:14.449131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:23:51.654 [2024-12-08 06:11:14.449147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.449251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.449281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:51.654 [2024-12-08 06:11:14.449295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:51.654 [2024-12-08 06:11:14.449306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.449354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:51.654 [2024-12-08 06:11:14.449373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.449396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:51.654 [2024-12-08 06:11:14.449412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:51.654 [2024-12-08 06:11:14.449424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.453313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.453403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:51.654 [2024-12-08 06:11:14.453419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.855 ms 00:23:51.654 [2024-12-08 06:11:14.453440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 [2024-12-08 06:11:14.453519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.654 [2024-12-08 06:11:14.453564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:51.654 [2024-12-08 06:11:14.453575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:51.654 [2024-12-08 06:11:14.453584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.654 
[2024-12-08 06:11:14.454844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.106 ms, result 0 00:23:53.028  [2024-12-08T06:11:17.008Z] Copying: 836/1048576 [kB] (836 kBps) [2024-12-08T06:11:17.942Z] Copying: 4168/1048576 [kB] (3332 kBps) [2024-12-08T06:11:18.877Z] Copying: 22/1024 [MB] (18 MBps) [2024-12-08T06:11:19.894Z] Copying: 50/1024 [MB] (27 MBps) [2024-12-08T06:11:20.864Z] Copying: 77/1024 [MB] (27 MBps) [2024-12-08T06:11:21.801Z] Copying: 105/1024 [MB] (27 MBps) [2024-12-08T06:11:22.736Z] Copying: 132/1024 [MB] (27 MBps) [2024-12-08T06:11:24.113Z] Copying: 159/1024 [MB] (27 MBps) [2024-12-08T06:11:24.679Z] Copying: 186/1024 [MB] (26 MBps) [2024-12-08T06:11:26.049Z] Copying: 215/1024 [MB] (28 MBps) [2024-12-08T06:11:26.983Z] Copying: 243/1024 [MB] (28 MBps) [2024-12-08T06:11:27.917Z] Copying: 271/1024 [MB] (27 MBps) [2024-12-08T06:11:28.853Z] Copying: 299/1024 [MB] (27 MBps) [2024-12-08T06:11:29.788Z] Copying: 325/1024 [MB] (26 MBps) [2024-12-08T06:11:30.746Z] Copying: 354/1024 [MB] (28 MBps) [2024-12-08T06:11:31.680Z] Copying: 383/1024 [MB] (29 MBps) [2024-12-08T06:11:33.083Z] Copying: 411/1024 [MB] (27 MBps) [2024-12-08T06:11:34.030Z] Copying: 438/1024 [MB] (27 MBps) [2024-12-08T06:11:34.968Z] Copying: 466/1024 [MB] (28 MBps) [2024-12-08T06:11:35.905Z] Copying: 495/1024 [MB] (28 MBps) [2024-12-08T06:11:36.843Z] Copying: 522/1024 [MB] (27 MBps) [2024-12-08T06:11:37.781Z] Copying: 550/1024 [MB] (27 MBps) [2024-12-08T06:11:38.720Z] Copying: 577/1024 [MB] (27 MBps) [2024-12-08T06:11:40.098Z] Copying: 605/1024 [MB] (27 MBps) [2024-12-08T06:11:41.046Z] Copying: 633/1024 [MB] (28 MBps) [2024-12-08T06:11:41.978Z] Copying: 661/1024 [MB] (28 MBps) [2024-12-08T06:11:42.914Z] Copying: 689/1024 [MB] (27 MBps) [2024-12-08T06:11:43.849Z] Copying: 717/1024 [MB] (27 MBps) [2024-12-08T06:11:44.784Z] Copying: 745/1024 [MB] (27 MBps) [2024-12-08T06:11:45.729Z] Copying: 772/1024 [MB] (27 MBps) [2024-12-08T06:11:47.105Z] Copying: 800/1024 [MB] (27 MBps) [2024-12-08T06:11:48.041Z] Copying: 828/1024 [MB] (27 MBps) [2024-12-08T06:11:49.007Z] Copying: 855/1024 [MB] (27 MBps) [2024-12-08T06:11:49.951Z] Copying: 881/1024 [MB] (26 MBps) [2024-12-08T06:11:50.889Z] Copying: 907/1024 [MB] (26 MBps) [2024-12-08T06:11:51.826Z] Copying: 934/1024 [MB] (27 MBps) [2024-12-08T06:11:52.763Z] Copying: 962/1024 [MB] (27 MBps) [2024-12-08T06:11:53.700Z] Copying: 989/1024 [MB] (27 MBps) [2024-12-08T06:11:53.959Z] Copying: 1017/1024 [MB] (27 MBps) [2024-12-08T06:11:54.219Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-08 06:11:54.170700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.170780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:31.174 [2024-12-08 06:11:54.170808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:31.174 [2024-12-08 06:11:54.170826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.170866] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:31.174 [2024-12-08 06:11:54.171382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.171408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:31.174 [2024-12-08 06:11:54.171434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:24:31.174 [2024-12-08 
06:11:54.171464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.171758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.171782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:31.174 [2024-12-08 06:11:54.171799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:24:31.174 [2024-12-08 06:11:54.171815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.182154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.182392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:31.174 [2024-12-08 06:11:54.182423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.311 ms 00:24:31.174 [2024-12-08 06:11:54.182438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.189332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.189502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:31.174 [2024-12-08 06:11:54.189547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.837 ms 00:24:31.174 [2024-12-08 06:11:54.189560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.190994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.191036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:31.174 [2024-12-08 06:11:54.191066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:24:31.174 [2024-12-08 06:11:54.191077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.193748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.193791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:31.174 [2024-12-08 06:11:54.193823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:24:31.174 [2024-12-08 06:11:54.193843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.195305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.195349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:31.174 [2024-12-08 06:11:54.195364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:24:31.174 [2024-12-08 06:11:54.195390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.196985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.197212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:31.174 [2024-12-08 06:11:54.197238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:24:31.174 [2024-12-08 06:11:54.197250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.198502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.198540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:31.174 [2024-12-08 06:11:54.198569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.207 ms 00:24:31.174 [2024-12-08 06:11:54.198580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.199731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.199790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:31.174 [2024-12-08 06:11:54.199807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:24:31.174 [2024-12-08 06:11:54.199818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.174 [2024-12-08 06:11:54.200930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.174 [2024-12-08 06:11:54.201093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:31.175 [2024-12-08 06:11:54.201120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:24:31.175 [2024-12-08 06:11:54.201132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.175 [2024-12-08 06:11:54.201175] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:31.175 [2024-12-08 06:11:54.201230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:31.175 [2024-12-08 06:11:54.201246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:31.175 [2024-12-08 06:11:54.201259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 
/ 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.201989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202037] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:31.175 [2024-12-08 06:11:54.202288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 
06:11:54.202346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:31.176 [2024-12-08 06:11:54.202462] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:31.176 [2024-12-08 06:11:54.202474] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ab36fbcb-0078-48c6-9daf-10ae506c4934 00:24:31.176 [2024-12-08 06:11:54.202486] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:31.176 [2024-12-08 06:11:54.202504] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:24:31.176 [2024-12-08 06:11:54.202515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:24:31.176 [2024-12-08 06:11:54.202528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:24:31.176 [2024-12-08 06:11:54.202549] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:31.176 [2024-12-08 06:11:54.202561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:31.176 [2024-12-08 06:11:54.202572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:31.176 [2024-12-08 06:11:54.202582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:31.176 [2024-12-08 06:11:54.202592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:31.176 [2024-12-08 06:11:54.202603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.176 [2024-12-08 06:11:54.202615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:31.176 [2024-12-08 06:11:54.202627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:24:31.176 [2024-12-08 06:11:54.202638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.176 [2024-12-08 06:11:54.204022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.176 [2024-12-08 06:11:54.204058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:31.176 [2024-12-08 06:11:54.204072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:24:31.176 [2024-12-08 06:11:54.204084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.176 [2024-12-08 06:11:54.204328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.176 
[2024-12-08 06:11:54.204392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:31.176 [2024-12-08 06:11:54.204513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:24:31.176 [2024-12-08 06:11:54.204537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.176 [2024-12-08 06:11:54.209053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.176 [2024-12-08 06:11:54.209095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.176 [2024-12-08 06:11:54.209127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.176 [2024-12-08 06:11:54.209138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.176 [2024-12-08 06:11:54.209213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.176 [2024-12-08 06:11:54.209248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.176 [2024-12-08 06:11:54.209262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.176 [2024-12-08 06:11:54.209273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.176 [2024-12-08 06:11:54.209335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.176 [2024-12-08 06:11:54.209353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.176 [2024-12-08 06:11:54.209366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.176 [2024-12-08 06:11:54.209377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.176 [2024-12-08 06:11:54.209398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.176 [2024-12-08 06:11:54.209412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.176 [2024-12-08 06:11:54.209424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.176 [2024-12-08 06:11:54.209435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.435 [2024-12-08 06:11:54.218461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.435 [2024-12-08 06:11:54.218722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.435 [2024-12-08 06:11:54.218766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.435 [2024-12-08 06:11:54.218779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.435 [2024-12-08 06:11:54.225855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.435 [2024-12-08 06:11:54.226066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.435 [2024-12-08 06:11:54.226117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.435 [2024-12-08 06:11:54.226131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.435 [2024-12-08 06:11:54.226226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.435 [2024-12-08 06:11:54.226246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:31.435 [2024-12-08 06:11:54.226259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.435 [2024-12-08 06:11:54.226271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.435 [2024-12-08 
06:11:54.226302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.435 [2024-12-08 06:11:54.226329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.435 [2024-12-08 06:11:54.226341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.435 [2024-12-08 06:11:54.226352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.435 [2024-12-08 06:11:54.226448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.435 [2024-12-08 06:11:54.226473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.435 [2024-12-08 06:11:54.226485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.435 [2024-12-08 06:11:54.226496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.435 [2024-12-08 06:11:54.226544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.435 [2024-12-08 06:11:54.226562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:31.435 [2024-12-08 06:11:54.226574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.435 [2024-12-08 06:11:54.226585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.435 [2024-12-08 06:11:54.226641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.436 [2024-12-08 06:11:54.226663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.436 [2024-12-08 06:11:54.226675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.436 [2024-12-08 06:11:54.226687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.436 [2024-12-08 06:11:54.226783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.436 [2024-12-08 06:11:54.226803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.436 [2024-12-08 06:11:54.226824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.436 [2024-12-08 06:11:54.226835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.436 [2024-12-08 06:11:54.226974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.248 ms, result 0 00:24:31.436 00:24:31.436 00:24:31.436 06:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:33.966 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:33.966 06:11:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:33.966 [2024-12-08 06:11:56.629798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:24:33.966 [2024-12-08 06:11:56.630186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91030 ] 00:24:33.966 [2024-12-08 06:11:56.767696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.966 [2024-12-08 06:11:56.803746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.966 [2024-12-08 06:11:56.890640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.966 [2024-12-08 06:11:56.890732] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.226 [2024-12-08 06:11:57.049742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.049807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:34.226 [2024-12-08 06:11:57.049863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:34.226 [2024-12-08 06:11:57.049875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.049940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.049958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:34.226 [2024-12-08 06:11:57.049987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:34.226 [2024-12-08 06:11:57.050010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.050041] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:34.226 [2024-12-08 06:11:57.050360] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:34.226 [2024-12-08 06:11:57.050391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.050404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:34.226 [2024-12-08 06:11:57.050417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:24:34.226 [2024-12-08 06:11:57.050428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.051671] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:34.226 [2024-12-08 06:11:57.053752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.053794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:34.226 [2024-12-08 06:11:57.053827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:24:34.226 [2024-12-08 06:11:57.053839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.053912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.053943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:34.226 [2024-12-08 06:11:57.053959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:34.226 [2024-12-08 06:11:57.053978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.058463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:34.226 [2024-12-08 06:11:57.058536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:34.226 [2024-12-08 06:11:57.058567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.408 ms 00:24:34.226 [2024-12-08 06:11:57.058578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.058702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.058726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:34.226 [2024-12-08 06:11:57.058739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:34.226 [2024-12-08 06:11:57.058750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.058859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.058878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:34.226 [2024-12-08 06:11:57.058891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:34.226 [2024-12-08 06:11:57.058902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.058935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:34.226 [2024-12-08 06:11:57.060498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.060535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:34.226 [2024-12-08 06:11:57.060565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:24:34.226 [2024-12-08 06:11:57.060575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.060611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.060636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:34.226 [2024-12-08 06:11:57.060662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:34.226 [2024-12-08 06:11:57.060672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.060700] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:34.226 [2024-12-08 06:11:57.060733] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:34.226 [2024-12-08 06:11:57.060783] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:34.226 [2024-12-08 06:11:57.060805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:34.226 [2024-12-08 06:11:57.060907] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:34.226 [2024-12-08 06:11:57.060922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:34.226 [2024-12-08 06:11:57.060936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:34.226 [2024-12-08 06:11:57.060949] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:34.226 [2024-12-08 06:11:57.060967] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:34.226 [2024-12-08 06:11:57.060978] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:34.226 [2024-12-08 06:11:57.060988] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:34.226 [2024-12-08 06:11:57.060999] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:34.226 [2024-12-08 06:11:57.061008] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:34.226 [2024-12-08 06:11:57.061019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.061029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:34.226 [2024-12-08 06:11:57.061040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:24:34.226 [2024-12-08 06:11:57.061050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.061154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.226 [2024-12-08 06:11:57.061167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:34.226 [2024-12-08 06:11:57.061182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:34.226 [2024-12-08 06:11:57.061192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.226 [2024-12-08 06:11:57.061334] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:34.226 [2024-12-08 06:11:57.061380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:34.226 [2024-12-08 06:11:57.061392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:34.226 [2024-12-08 06:11:57.061435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:34.226 [2024-12-08 06:11:57.061465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.226 [2024-12-08 06:11:57.061486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:34.226 [2024-12-08 06:11:57.061501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:34.226 [2024-12-08 06:11:57.061513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.226 [2024-12-08 06:11:57.061523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:34.226 [2024-12-08 06:11:57.061533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:34.226 [2024-12-08 06:11:57.061559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:34.226 [2024-12-08 06:11:57.061580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061590] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:34.226 [2024-12-08 06:11:57.061611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:34.226 [2024-12-08 06:11:57.061642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:34.226 [2024-12-08 06:11:57.061673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:34.226 [2024-12-08 06:11:57.061716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:34.226 [2024-12-08 06:11:57.061748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.226 [2024-12-08 06:11:57.061768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:34.226 [2024-12-08 06:11:57.061779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:34.226 [2024-12-08 06:11:57.061789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.226 [2024-12-08 06:11:57.061799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:34.226 [2024-12-08 06:11:57.061810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:34.226 [2024-12-08 06:11:57.061819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:34.226 [2024-12-08 06:11:57.061840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:34.226 [2024-12-08 06:11:57.061850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061863] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:34.226 [2024-12-08 06:11:57.061883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:34.226 [2024-12-08 06:11:57.061894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.226 [2024-12-08 06:11:57.061921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:34.226 [2024-12-08 06:11:57.061935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:34.226 [2024-12-08 06:11:57.061945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:34.226 
[2024-12-08 06:11:57.061956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:34.226 [2024-12-08 06:11:57.061966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:34.226 [2024-12-08 06:11:57.061977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:34.226 [2024-12-08 06:11:57.061988] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:34.226 [2024-12-08 06:11:57.062003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.226 [2024-12-08 06:11:57.062023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:34.226 [2024-12-08 06:11:57.062035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:34.226 [2024-12-08 06:11:57.062046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:34.226 [2024-12-08 06:11:57.062057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:34.226 [2024-12-08 06:11:57.062071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:34.226 [2024-12-08 06:11:57.062084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:34.226 [2024-12-08 06:11:57.062095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:34.226 [2024-12-08 06:11:57.062106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:34.226 [2024-12-08 06:11:57.062116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:34.226 [2024-12-08 06:11:57.062139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:34.226 [2024-12-08 06:11:57.062150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:34.226 [2024-12-08 06:11:57.062162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:34.226 [2024-12-08 06:11:57.062173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:34.227 [2024-12-08 06:11:57.062184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:34.227 [2024-12-08 06:11:57.062195] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:34.227 [2024-12-08 06:11:57.062215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.227 [2024-12-08 06:11:57.062242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:34.227 [2024-12-08 06:11:57.062255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:34.227 [2024-12-08 06:11:57.062267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:34.227 [2024-12-08 06:11:57.062278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:34.227 [2024-12-08 06:11:57.062295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.062307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:34.227 [2024-12-08 06:11:57.062319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:24:34.227 [2024-12-08 06:11:57.062330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.085292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.085626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:34.227 [2024-12-08 06:11:57.085665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.889 ms 00:24:34.227 [2024-12-08 06:11:57.085678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.085794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.085810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:34.227 [2024-12-08 06:11:57.085823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:34.227 [2024-12-08 06:11:57.085834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.094169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.094295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:34.227 [2024-12-08 06:11:57.094314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.237 ms 00:24:34.227 [2024-12-08 06:11:57.094326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.094401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.094417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:34.227 [2024-12-08 06:11:57.094430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:34.227 [2024-12-08 06:11:57.094454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.094809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.094839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:34.227 [2024-12-08 06:11:57.094853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:24:34.227 [2024-12-08 06:11:57.094865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.095022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.095057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:34.227 [2024-12-08 06:11:57.095069] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:34.227 [2024-12-08 06:11:57.095080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.100178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.100262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:34.227 [2024-12-08 06:11:57.100302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.072 ms 00:24:34.227 [2024-12-08 06:11:57.100314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.102759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:34.227 [2024-12-08 06:11:57.102810] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:34.227 [2024-12-08 06:11:57.102852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.102863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:34.227 [2024-12-08 06:11:57.102874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.396 ms 00:24:34.227 [2024-12-08 06:11:57.102885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.118142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.118233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:34.227 [2024-12-08 06:11:57.118278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.214 ms 00:24:34.227 [2024-12-08 06:11:57.118290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.120135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.120345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:34.227 [2024-12-08 06:11:57.120388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:24:34.227 [2024-12-08 06:11:57.120399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.122161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.122257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:34.227 [2024-12-08 06:11:57.122274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:24:34.227 [2024-12-08 06:11:57.122285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.122691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.122718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:34.227 [2024-12-08 06:11:57.122743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:24:34.227 [2024-12-08 06:11:57.122755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.138870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.138954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:34.227 [2024-12-08 06:11:57.138994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.091 ms 00:24:34.227 [2024-12-08 06:11:57.139005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.147293] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:34.227 [2024-12-08 06:11:57.149840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.150033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:34.227 [2024-12-08 06:11:57.150078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.763 ms 00:24:34.227 [2024-12-08 06:11:57.150112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.150217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.150242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:34.227 [2024-12-08 06:11:57.150256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:34.227 [2024-12-08 06:11:57.150268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.150965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.150991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:34.227 [2024-12-08 06:11:57.151004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:24:34.227 [2024-12-08 06:11:57.151020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.151121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.151139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:34.227 [2024-12-08 06:11:57.151152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:34.227 [2024-12-08 06:11:57.151163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.151207] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:34.227 [2024-12-08 06:11:57.151224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.151253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:34.227 [2024-12-08 06:11:57.151265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:34.227 [2024-12-08 06:11:57.151275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.154847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.154890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:34.227 [2024-12-08 06:11:57.154938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.537 ms 00:24:34.227 [2024-12-08 06:11:57.154950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 [2024-12-08 06:11:57.155050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.227 [2024-12-08 06:11:57.155068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:34.227 [2024-12-08 06:11:57.155107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:34.227 [2024-12-08 06:11:57.155118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.227 
[2024-12-08 06:11:57.156418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.100 ms, result 0 00:24:35.602  [2024-12-08T06:11:59.585Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-08T06:12:00.521Z] Copying: 50/1024 [MB] (24 MBps) [2024-12-08T06:12:01.459Z] Copying: 75/1024 [MB] (24 MBps) [2024-12-08T06:12:02.395Z] Copying: 101/1024 [MB] (26 MBps) [2024-12-08T06:12:03.772Z] Copying: 128/1024 [MB] (26 MBps) [2024-12-08T06:12:04.708Z] Copying: 154/1024 [MB] (26 MBps) [2024-12-08T06:12:05.642Z] Copying: 181/1024 [MB] (26 MBps) [2024-12-08T06:12:06.574Z] Copying: 205/1024 [MB] (24 MBps) [2024-12-08T06:12:07.511Z] Copying: 230/1024 [MB] (25 MBps) [2024-12-08T06:12:08.448Z] Copying: 255/1024 [MB] (24 MBps) [2024-12-08T06:12:09.386Z] Copying: 280/1024 [MB] (25 MBps) [2024-12-08T06:12:10.366Z] Copying: 305/1024 [MB] (25 MBps) [2024-12-08T06:12:11.752Z] Copying: 330/1024 [MB] (24 MBps) [2024-12-08T06:12:12.691Z] Copying: 354/1024 [MB] (23 MBps) [2024-12-08T06:12:13.629Z] Copying: 379/1024 [MB] (25 MBps) [2024-12-08T06:12:14.565Z] Copying: 406/1024 [MB] (26 MBps) [2024-12-08T06:12:15.500Z] Copying: 432/1024 [MB] (26 MBps) [2024-12-08T06:12:16.447Z] Copying: 459/1024 [MB] (26 MBps) [2024-12-08T06:12:17.380Z] Copying: 484/1024 [MB] (24 MBps) [2024-12-08T06:12:18.755Z] Copying: 508/1024 [MB] (24 MBps) [2024-12-08T06:12:19.690Z] Copying: 533/1024 [MB] (24 MBps) [2024-12-08T06:12:20.627Z] Copying: 558/1024 [MB] (25 MBps) [2024-12-08T06:12:21.578Z] Copying: 586/1024 [MB] (27 MBps) [2024-12-08T06:12:22.514Z] Copying: 611/1024 [MB] (25 MBps) [2024-12-08T06:12:23.450Z] Copying: 637/1024 [MB] (26 MBps) [2024-12-08T06:12:24.386Z] Copying: 663/1024 [MB] (25 MBps) [2024-12-08T06:12:25.758Z] Copying: 690/1024 [MB] (26 MBps) [2024-12-08T06:12:26.740Z] Copying: 716/1024 [MB] (26 MBps) [2024-12-08T06:12:27.674Z] Copying: 742/1024 [MB] (26 MBps) [2024-12-08T06:12:28.625Z] Copying: 768/1024 [MB] (26 MBps) [2024-12-08T06:12:29.568Z] Copying: 794/1024 [MB] (26 MBps) [2024-12-08T06:12:30.502Z] Copying: 821/1024 [MB] (26 MBps) [2024-12-08T06:12:31.439Z] Copying: 848/1024 [MB] (26 MBps) [2024-12-08T06:12:32.376Z] Copying: 874/1024 [MB] (26 MBps) [2024-12-08T06:12:33.774Z] Copying: 900/1024 [MB] (26 MBps) [2024-12-08T06:12:34.709Z] Copying: 926/1024 [MB] (25 MBps) [2024-12-08T06:12:35.643Z] Copying: 951/1024 [MB] (25 MBps) [2024-12-08T06:12:36.576Z] Copying: 976/1024 [MB] (24 MBps) [2024-12-08T06:12:37.510Z] Copying: 1001/1024 [MB] (25 MBps) [2024-12-08T06:12:37.510Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-08 06:12:37.389317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.389393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:14.465 [2024-12-08 06:12:37.389413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:14.465 [2024-12-08 06:12:37.389424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.389454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:14.465 [2024-12-08 06:12:37.390334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.390356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:14.465 [2024-12-08 06:12:37.390369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:25:14.465 [2024-12-08 
06:12:37.390380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.390630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.390655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:14.465 [2024-12-08 06:12:37.390668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:25:14.465 [2024-12-08 06:12:37.390679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.394024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.394060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:14.465 [2024-12-08 06:12:37.394089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms 00:25:14.465 [2024-12-08 06:12:37.394098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.400391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.400419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:14.465 [2024-12-08 06:12:37.400448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.272 ms 00:25:14.465 [2024-12-08 06:12:37.400457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.401966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.402020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:14.465 [2024-12-08 06:12:37.402051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:25:14.465 [2024-12-08 06:12:37.402060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.405305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.405343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:14.465 [2024-12-08 06:12:37.405373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:25:14.465 [2024-12-08 06:12:37.405397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.407391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.407431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:14.465 [2024-12-08 06:12:37.407455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:25:14.465 [2024-12-08 06:12:37.407466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.409472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.409537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:14.465 [2024-12-08 06:12:37.409566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:25:14.465 [2024-12-08 06:12:37.409575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.465 [2024-12-08 06:12:37.411191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.465 [2024-12-08 06:12:37.411281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:14.465 [2024-12-08 06:12:37.411299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.562 ms
00:25:14.465 [2024-12-08 06:12:37.411309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:14.465 [2024-12-08 06:12:37.412669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:14.465 [2024-12-08 06:12:37.412706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:14.465 [2024-12-08 06:12:37.412735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms
00:25:14.465 [2024-12-08 06:12:37.412744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:14.465 [2024-12-08 06:12:37.413967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:14.465 [2024-12-08 06:12:37.414225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:14.465 [2024-12-08 06:12:37.414255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms
00:25:14.465 [2024-12-08 06:12:37.414267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:14.465 [2024-12-08 06:12:37.414312] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:14.465 [2024-12-08 06:12:37.414334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:25:14.465 [2024-12-08 06:12:37.414358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:25:14.465 [2024-12-08 06:12:37.414371 .. 06:12:37.415562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (entries identical for all 98 bands, condensed)
00:25:14.467 [2024-12-08 06:12:37.415582] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:14.467 [2024-12-08 06:12:37.415594] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ab36fbcb-0078-48c6-9daf-10ae506c4934
00:25:14.467 [2024-12-08 06:12:37.415605] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:25:14.467 [2024-12-08 06:12:37.415616] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:14.467 [2024-12-08 06:12:37.415626] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:14.467 [2024-12-08 06:12:37.415637] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:14.467 [2024-12-08 06:12:37.415660] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:14.467 [2024-12-08 06:12:37.415672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:14.467 [2024-12-08 06:12:37.415683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:14.467 [2024-12-08 06:12:37.415692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:14.467 [2024-12-08 06:12:37.415717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:14.467 [2024-12-08 06:12:37.415728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:14.467 [2024-12-08 06:12:37.415739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:14.467 [2024-12-08 06:12:37.415779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms
00:25:14.467 [2024-12-08 06:12:37.415806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:14.467 [2024-12-08 06:12:37.417163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:14.467 [2024-12-08 06:12:37.417217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:14.467 [2024-12-08 06:12:37.417231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms
00:25:14.467 [2024-12-08 06:12:37.417242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:14.467 [2024-12-08 06:12:37.417481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:14.467 [2024-12-08
06:12:37.417540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:14.467 [2024-12-08 06:12:37.417553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:14.467 [2024-12-08 06:12:37.417575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.422378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.422423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:14.467 [2024-12-08 06:12:37.422439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.422466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.422557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.422571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:14.467 [2024-12-08 06:12:37.422582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.422591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.422667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.422686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:14.467 [2024-12-08 06:12:37.422697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.422707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.422743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.422755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:14.467 [2024-12-08 06:12:37.422781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.422791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.432449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.432517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:14.467 [2024-12-08 06:12:37.432536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.432547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.439808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.440052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:14.467 [2024-12-08 06:12:37.440079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.440091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.440213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.440232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:14.467 [2024-12-08 06:12:37.440264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.440293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.440325] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.440350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:14.467 [2024-12-08 06:12:37.440362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.440385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.440513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.440546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:14.467 [2024-12-08 06:12:37.440558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.440569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.440655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.440671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:14.467 [2024-12-08 06:12:37.440681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.440697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.440752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.440765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:14.467 [2024-12-08 06:12:37.440775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.440784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.440828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.467 [2024-12-08 06:12:37.440843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:14.467 [2024-12-08 06:12:37.440852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.467 [2024-12-08 06:12:37.440867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.467 [2024-12-08 06:12:37.440988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.658 ms, result 0 00:25:14.725 00:25:14.725 00:25:14.725 06:12:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:17.271 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:17.271 06:12:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:17.271 06:12:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:17.271 06:12:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:17.271 06:12:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:17.271 06:12:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:17.271 06:12:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:17.271 06:12:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:17.271 Process with pid 89163 is not found 00:25:17.271 
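The restore_kill teardown traced here reduces to deleting the generated FTL config and test files, then reaping the test app. A minimal bash sketch of that flow follows; $testdir and $svcpid are hypothetical stand-ins for the literal repo paths and pid 89163 in the trace, and the pid probe mirrors the kill -0 check visible in the killprocess entries just below:

restore_kill() {
  # Same files the traced rm -f entries remove.
  rm -f "$testdir/config/ftl.json"
  rm -f "$testdir/testfile" "$testdir/testfile2"
  rm -f "$testdir/testfile.md5" "$testdir/testfile2.md5"
  # killprocess-style guard: kill -0 only probes whether the pid exists.
  if kill -0 "$svcpid" 2>/dev/null; then
    kill "$svcpid"
  else
    echo "Process with pid $svcpid is not found"   # this run's outcome
  fi
}

In this run the FTL app had already exited cleanly, so the probe fails and only the echo fires, which is exactly the "No such process" branch the trace records next.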
06:12:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89163 00:25:17.271 06:12:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89163 ']' 00:25:17.271 06:12:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89163 00:25:17.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89163) - No such process 00:25:17.271 06:12:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89163 is not found' 00:25:17.271 06:12:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:17.569 Remove shared memory files 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:17.569 ************************************ 00:25:17.569 END TEST ftl_dirty_shutdown 00:25:17.569 ************************************ 00:25:17.569 00:25:17.569 real 3m44.379s 00:25:17.569 user 4m19.403s 00:25:17.569 sys 0m35.872s 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.569 06:12:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:17.569 06:12:40 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:17.569 06:12:40 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:17.569 06:12:40 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.569 06:12:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:17.569 ************************************ 00:25:17.569 START TEST ftl_upgrade_shutdown 00:25:17.569 ************************************ 00:25:17.569 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:17.569 * Looking for test storage... 
00:25:17.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:17.569 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:17.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.839 --rc genhtml_branch_coverage=1 00:25:17.839 --rc genhtml_function_coverage=1 00:25:17.839 --rc genhtml_legend=1 00:25:17.839 --rc geninfo_all_blocks=1 00:25:17.839 --rc geninfo_unexecuted_blocks=1 00:25:17.839 00:25:17.839 ' 00:25:17.839 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:17.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.839 --rc genhtml_branch_coverage=1 00:25:17.839 --rc genhtml_function_coverage=1 00:25:17.839 --rc genhtml_legend=1 00:25:17.840 --rc geninfo_all_blocks=1 00:25:17.840 --rc geninfo_unexecuted_blocks=1 00:25:17.840 00:25:17.840 ' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:17.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.840 --rc genhtml_branch_coverage=1 00:25:17.840 --rc genhtml_function_coverage=1 00:25:17.840 --rc genhtml_legend=1 00:25:17.840 --rc geninfo_all_blocks=1 00:25:17.840 --rc geninfo_unexecuted_blocks=1 00:25:17.840 00:25:17.840 ' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:17.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.840 --rc genhtml_branch_coverage=1 00:25:17.840 --rc genhtml_function_coverage=1 00:25:17.840 --rc genhtml_legend=1 00:25:17.840 --rc geninfo_all_blocks=1 00:25:17.840 --rc geninfo_unexecuted_blocks=1 00:25:17.840 00:25:17.840 ' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:17.840 06:12:40 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91537 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91537 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91537 ']' 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.840 06:12:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:17.840 [2024-12-08 06:12:40.863238] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
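For orientation, the target bring-up traced above amounts to exporting the six FTL test parameters and starting spdk_tgt pinned to core 0, then waiting for its RPC server. A sketch assembled only from values visible in the log; polling rpc_get_methods is an assumed stand-in for the waitforlisten helper, whose internals are not shown here:

export FTL_BDEV=ftl
export FTL_BASE=0000:00:11.0 FTL_BASE_SIZE=20480
export FTL_CACHE=0000:00:10.0 FTL_CACHE_SIZE=5120
export FTL_L2P_DRAM_LIMIT=2

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" '--cpumask=[0]' &    # pid 91537 in this run
spdk_tgt_pid=$!
# Assumed equivalent of 'waitforlisten 91537': retry until RPC responds.
until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done

The startup banner that follows confirms the single-core configuration: DPDK EAL is launched with -l 0, and the reactor reports one core available.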
00:25:17.840 [2024-12-08 06:12:40.863462] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91537 ] 00:25:18.120 [2024-12-08 06:12:41.013063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.120 [2024-12-08 06:12:41.051777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:19.052 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:25:19.309 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:19.567 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:19.567 { 00:25:19.567 "name": "basen1", 00:25:19.567 "aliases": [ 00:25:19.567 "7482fa09-ab95-4df5-84c1-c1aa932c5146" 00:25:19.567 ], 00:25:19.567 "product_name": "NVMe disk", 00:25:19.567 "block_size": 4096, 00:25:19.567 "num_blocks": 1310720, 00:25:19.567 "uuid": "7482fa09-ab95-4df5-84c1-c1aa932c5146", 00:25:19.567 "numa_id": -1, 00:25:19.567 "assigned_rate_limits": { 00:25:19.567 "rw_ios_per_sec": 0, 00:25:19.567 "rw_mbytes_per_sec": 0, 00:25:19.567 "r_mbytes_per_sec": 0, 00:25:19.567 "w_mbytes_per_sec": 0 00:25:19.567 }, 00:25:19.567 "claimed": true, 00:25:19.567 "claim_type": "read_many_write_one", 00:25:19.567 "zoned": false, 00:25:19.567 "supported_io_types": { 00:25:19.567 "read": true, 00:25:19.567 "write": true, 00:25:19.567 "unmap": true, 00:25:19.567 "flush": true, 00:25:19.567 "reset": true, 00:25:19.567 "nvme_admin": true, 00:25:19.567 "nvme_io": true, 00:25:19.567 "nvme_io_md": false, 00:25:19.567 "write_zeroes": true, 00:25:19.567 "zcopy": false, 00:25:19.567 "get_zone_info": false, 00:25:19.567 "zone_management": false, 00:25:19.567 "zone_append": false, 00:25:19.567 "compare": true, 00:25:19.567 "compare_and_write": false, 00:25:19.568 "abort": true, 00:25:19.568 "seek_hole": false, 00:25:19.568 "seek_data": false, 00:25:19.568 "copy": true, 00:25:19.568 "nvme_iov_md": false 00:25:19.568 }, 00:25:19.568 "driver_specific": { 00:25:19.568 "nvme": [ 00:25:19.568 { 00:25:19.568 "pci_address": "0000:00:11.0", 00:25:19.568 "trid": { 00:25:19.568 "trtype": "PCIe", 00:25:19.568 "traddr": "0000:00:11.0" 00:25:19.568 }, 00:25:19.568 "ctrlr_data": { 00:25:19.568 "cntlid": 0, 00:25:19.568 "vendor_id": "0x1b36", 00:25:19.568 "model_number": "QEMU NVMe Ctrl", 00:25:19.568 "serial_number": "12341", 00:25:19.568 "firmware_revision": "8.0.0", 00:25:19.568 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:19.568 "oacs": { 00:25:19.568 "security": 0, 00:25:19.568 "format": 1, 00:25:19.568 "firmware": 0, 00:25:19.568 "ns_manage": 1 00:25:19.568 }, 00:25:19.568 "multi_ctrlr": false, 00:25:19.568 "ana_reporting": false 00:25:19.568 }, 00:25:19.568 "vs": { 00:25:19.568 "nvme_version": "1.4" 00:25:19.568 }, 00:25:19.568 "ns_data": { 00:25:19.568 "id": 1, 00:25:19.568 "can_share": false 00:25:19.568 } 00:25:19.568 } 00:25:19.568 ], 00:25:19.568 "mp_policy": "active_passive" 00:25:19.568 } 00:25:19.568 } 00:25:19.568 ]' 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:19.568 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:19.825 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=c8772269-5a72-4d12-a3f9-325c0f2d36a4 00:25:19.825 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:19.825 06:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c8772269-5a72-4d12-a3f9-325c0f2d36a4 00:25:20.082 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:20.339 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=98d041ff-4591-4c80-b5fa-5b882b122531 00:25:20.339 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 98d041ff-4591-4c80-b5fa-5b882b122531 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ba6079e0-1e28-450f-ba12-bc901b1d017d 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ba6079e0-1e28-450f-ba12-bc901b1d017d ]] 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ba6079e0-1e28-450f-ba12-bc901b1d017d 5120 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ba6079e0-1e28-450f-ba12-bc901b1d017d 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ba6079e0-1e28-450f-ba12-bc901b1d017d 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ba6079e0-1e28-450f-ba12-bc901b1d017d 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:20.596 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba6079e0-1e28-450f-ba12-bc901b1d017d 00:25:20.854 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:20.854 { 00:25:20.854 "name": "ba6079e0-1e28-450f-ba12-bc901b1d017d", 00:25:20.854 "aliases": [ 00:25:20.854 "lvs/basen1p0" 00:25:20.854 ], 00:25:20.854 "product_name": "Logical Volume", 00:25:20.854 "block_size": 4096, 00:25:20.854 "num_blocks": 5242880, 00:25:20.854 "uuid": "ba6079e0-1e28-450f-ba12-bc901b1d017d", 00:25:20.854 "assigned_rate_limits": { 00:25:20.854 "rw_ios_per_sec": 0, 00:25:20.854 "rw_mbytes_per_sec": 0, 00:25:20.854 "r_mbytes_per_sec": 0, 00:25:20.854 "w_mbytes_per_sec": 0 00:25:20.854 }, 00:25:20.854 "claimed": false, 00:25:20.854 "zoned": false, 00:25:20.854 "supported_io_types": { 00:25:20.854 "read": true, 00:25:20.854 "write": true, 00:25:20.854 "unmap": true, 00:25:20.854 "flush": false, 00:25:20.854 "reset": true, 00:25:20.854 "nvme_admin": false, 00:25:20.854 "nvme_io": false, 00:25:20.854 "nvme_io_md": false, 00:25:20.854 "write_zeroes": 
true, 00:25:20.854 "zcopy": false, 00:25:20.854 "get_zone_info": false, 00:25:20.854 "zone_management": false, 00:25:20.854 "zone_append": false, 00:25:20.854 "compare": false, 00:25:20.854 "compare_and_write": false, 00:25:20.854 "abort": false, 00:25:20.854 "seek_hole": true, 00:25:20.854 "seek_data": true, 00:25:20.854 "copy": false, 00:25:20.854 "nvme_iov_md": false 00:25:20.854 }, 00:25:20.854 "driver_specific": { 00:25:20.854 "lvol": { 00:25:20.854 "lvol_store_uuid": "98d041ff-4591-4c80-b5fa-5b882b122531", 00:25:20.854 "base_bdev": "basen1", 00:25:20.854 "thin_provision": true, 00:25:20.854 "num_allocated_clusters": 0, 00:25:20.854 "snapshot": false, 00:25:20.854 "clone": false, 00:25:20.854 "esnap_clone": false 00:25:20.854 } 00:25:20.854 } 00:25:20.854 } 00:25:20.854 ]' 00:25:20.854 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:21.112 06:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:21.370 06:12:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:21.370 06:12:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:21.370 06:12:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:21.628 06:12:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:21.628 06:12:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:21.628 06:12:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ba6079e0-1e28-450f-ba12-bc901b1d017d -c cachen1p0 --l2p_dram_limit 2 00:25:21.886 [2024-12-08 06:12:44.769663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.886 [2024-12-08 06:12:44.769752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:21.886 [2024-12-08 06:12:44.769773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:21.886 [2024-12-08 06:12:44.769787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.886 [2024-12-08 06:12:44.769869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.886 [2024-12-08 06:12:44.769892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:21.886 [2024-12-08 06:12:44.769905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:25:21.886 [2024-12-08 06:12:44.769919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.886 [2024-12-08 06:12:44.769951] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:21.886 [2024-12-08 
06:12:44.770539] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:21.886 [2024-12-08 06:12:44.770627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.886 [2024-12-08 06:12:44.770769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:21.886 [2024-12-08 06:12:44.770826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.682 ms 00:25:21.886 [2024-12-08 06:12:44.770884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.771233] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 972fe621-616f-4377-97a5-efd6dbc3ad00 00:25:21.887 [2024-12-08 06:12:44.772652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.772687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:21.887 [2024-12-08 06:12:44.772723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:21.887 [2024-12-08 06:12:44.772735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.777752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.777972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:21.887 [2024-12-08 06:12:44.778023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.953 ms 00:25:21.887 [2024-12-08 06:12:44.778037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.778122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.778142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:21.887 [2024-12-08 06:12:44.778158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:25:21.887 [2024-12-08 06:12:44.778173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.778333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.778361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:21.887 [2024-12-08 06:12:44.778378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:21.887 [2024-12-08 06:12:44.778390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.778441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:21.887 [2024-12-08 06:12:44.780166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.780235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:21.887 [2024-12-08 06:12:44.780255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.753 ms 00:25:21.887 [2024-12-08 06:12:44.780269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.780320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.780339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:21.887 [2024-12-08 06:12:44.780352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:21.887 [2024-12-08 06:12:44.780368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.780392] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:21.887 [2024-12-08 06:12:44.780571] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:21.887 [2024-12-08 06:12:44.780594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:21.887 [2024-12-08 06:12:44.780612] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:21.887 [2024-12-08 06:12:44.780628] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:21.887 [2024-12-08 06:12:44.780644] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:21.887 [2024-12-08 06:12:44.780658] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:21.887 [2024-12-08 06:12:44.780676] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:21.887 [2024-12-08 06:12:44.780688] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:21.887 [2024-12-08 06:12:44.780704] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:21.887 [2024-12-08 06:12:44.780720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.780734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:21.887 [2024-12-08 06:12:44.780747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:25:21.887 [2024-12-08 06:12:44.780772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.780869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.887 [2024-12-08 06:12:44.780889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:21.887 [2024-12-08 06:12:44.780902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:25:21.887 [2024-12-08 06:12:44.780916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.887 [2024-12-08 06:12:44.781025] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:21.887 [2024-12-08 06:12:44.781206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:21.887 [2024-12-08 06:12:44.781234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:21.887 [2024-12-08 06:12:44.781277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:21.887 [2024-12-08 06:12:44.781303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:21.887 [2024-12-08 06:12:44.781314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:21.887 [2024-12-08 06:12:44.781328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:21.887 [2024-12-08 06:12:44.781352] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:21.887 [2024-12-08 06:12:44.781363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:21.887 [2024-12-08 06:12:44.781393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:21.887 [2024-12-08 06:12:44.781406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:21.887 [2024-12-08 06:12:44.781430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:21.887 [2024-12-08 06:12:44.781456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:21.887 [2024-12-08 06:12:44.781480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:21.887 [2024-12-08 06:12:44.781493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:21.887 [2024-12-08 06:12:44.781517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:21.887 [2024-12-08 06:12:44.781528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:21.887 [2024-12-08 06:12:44.781566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:21.887 [2024-12-08 06:12:44.781579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:21.887 [2024-12-08 06:12:44.781603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:21.887 [2024-12-08 06:12:44.781614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:21.887 [2024-12-08 06:12:44.781637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:21.887 [2024-12-08 06:12:44.781649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:21.887 [2024-12-08 06:12:44.781672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:21.887 [2024-12-08 06:12:44.781706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:21.887 [2024-12-08 06:12:44.781743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:21.887 [2024-12-08 06:12:44.781753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781765] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:21.887 [2024-12-08 06:12:44.781777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:21.887 [2024-12-08 06:12:44.781792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.887 [2024-12-08 06:12:44.781817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:21.887 [2024-12-08 06:12:44.781840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:21.887 [2024-12-08 06:12:44.781853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:21.887 [2024-12-08 06:12:44.781867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:21.887 [2024-12-08 06:12:44.781880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:21.887 [2024-12-08 06:12:44.781890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:21.887 [2024-12-08 06:12:44.781907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:21.887 [2024-12-08 06:12:44.781923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.887 [2024-12-08 06:12:44.781937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:21.887 [2024-12-08 06:12:44.781949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:21.887 [2024-12-08 06:12:44.781962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:21.887 [2024-12-08 06:12:44.781973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:21.887 [2024-12-08 06:12:44.781986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:21.888 [2024-12-08 06:12:44.781997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:21.888 [2024-12-08 06:12:44.782015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:21.888 [2024-12-08 06:12:44.782026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:21.888 [2024-12-08 06:12:44.782113] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:21.888 [2024-12-08 06:12:44.782126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:21.888 [2024-12-08 06:12:44.782171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:21.888 [2024-12-08 06:12:44.782197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:21.888 [2024-12-08 06:12:44.782211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:21.888 [2024-12-08 06:12:44.782227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.888 [2024-12-08 06:12:44.782239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:21.888 [2024-12-08 06:12:44.782266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.268 ms 00:25:21.888 [2024-12-08 06:12:44.782277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.888 [2024-12-08 06:12:44.782362] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:25:21.888 [2024-12-08 06:12:44.782382] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:23.783 [2024-12-08 06:12:46.819930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.783 [2024-12-08 06:12:46.820008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:23.784 [2024-12-08 06:12:46.820050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2037.574 ms 00:25:23.784 [2024-12-08 06:12:46.820066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.784 [2024-12-08 06:12:46.828216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.828549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:24.041 [2024-12-08 06:12:46.828607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.050 ms 00:25:24.041 [2024-12-08 06:12:46.828635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.828762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.828785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:24.041 [2024-12-08 06:12:46.828802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:24.041 [2024-12-08 06:12:46.828830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.837565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.837635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:24.041 [2024-12-08 06:12:46.837682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.666 ms 00:25:24.041 [2024-12-08 06:12:46.837695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.837748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.837773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:24.041 [2024-12-08 06:12:46.837794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:24.041 [2024-12-08 06:12:46.837806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.838264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.838300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:24.041 [2024-12-08 06:12:46.838318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:25:24.041 [2024-12-08 06:12:46.838338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.838430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.838448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:24.041 [2024-12-08 06:12:46.838464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:25:24.041 [2024-12-08 06:12:46.838479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.852631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.852695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:24.041 [2024-12-08 06:12:46.852737] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.119 ms 00:25:24.041 [2024-12-08 06:12:46.852754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.863484] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:24.041 [2024-12-08 06:12:46.864411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.864619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:24.041 [2024-12-08 06:12:46.864650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.497 ms 00:25:24.041 [2024-12-08 06:12:46.864667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.875888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.875961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:24.041 [2024-12-08 06:12:46.875982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.181 ms 00:25:24.041 [2024-12-08 06:12:46.875999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.876103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.876127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:24.041 [2024-12-08 06:12:46.876141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:25:24.041 [2024-12-08 06:12:46.876156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.879425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.879505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:24.041 [2024-12-08 06:12:46.879525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.203 ms 00:25:24.041 [2024-12-08 06:12:46.879540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.882705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.882888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:24.041 [2024-12-08 06:12:46.882917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.113 ms 00:25:24.041 [2024-12-08 06:12:46.882933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.883331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.883364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:24.041 [2024-12-08 06:12:46.883381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms 00:25:24.041 [2024-12-08 06:12:46.883398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.910637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.910720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:24.041 [2024-12-08 06:12:46.910742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.208 ms 00:25:24.041 [2024-12-08 06:12:46.910757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.915127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
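Each FTL management step in the startup trace above is logged as a run of trace_step records carrying a name, a duration, and a status. To pull the per-step timings out of a console log like this one, for example to confirm that the 2037 ms "Scrub NV cache" step dominates the total FTL startup time reported a few records below, a small awk pass is enough. This is an illustrative helper, not part of the SPDK test scripts, and it assumes the console output has been saved to a file named build.log:

  # List trace_step durations, slowest first. A step is logged as
  # "name: <words> ... duration: <ms> ms"; a timestamp field ends the name.
  awk '{
    for (i = 1; i <= NF; i++) {
      if ($i == "name:") {
        name = ""
        for (j = i + 1; j <= NF && $j !~ /^[0-9]+:[0-9]+:/; j++)
          name = name (name ? " " : "") $j
      }
      if ($i == "duration:")
        printf "%10.3f ms  %s\n", $(i + 1), name
    }
  }' build.log | sort -rn | head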
00:25:24.041 [2024-12-08 06:12:46.915247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:24.041 [2024-12-08 06:12:46.915269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.305 ms 00:25:24.041 [2024-12-08 06:12:46.915284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.919051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.919290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:24.041 [2024-12-08 06:12:46.919320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.705 ms 00:25:24.041 [2024-12-08 06:12:46.919336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.923322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.923374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:24.041 [2024-12-08 06:12:46.923394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.932 ms 00:25:24.041 [2024-12-08 06:12:46.923411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.923490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.923516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:24.041 [2024-12-08 06:12:46.923531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:24.041 [2024-12-08 06:12:46.923545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.923641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.041 [2024-12-08 06:12:46.923661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:24.041 [2024-12-08 06:12:46.923674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:25:24.041 [2024-12-08 06:12:46.923688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.041 [2024-12-08 06:12:46.924757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2154.631 ms, result 0 00:25:24.041 { 00:25:24.041 "name": "ftl", 00:25:24.041 "uuid": "972fe621-616f-4377-97a5-efd6dbc3ad00" 00:25:24.041 } 00:25:24.041 06:12:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:24.299 [2024-12-08 06:12:47.230916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.299 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:24.556 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:24.814 [2024-12-08 06:12:47.811606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:24.814 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:25.072 [2024-12-08 06:12:48.072396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:25.072 06:12:48 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:25.636 Fill FTL, iteration 1 00:25:25.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91655 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91655 /var/tmp/spdk.tgt.sock 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91655 ']' 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.636 06:12:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:25.636 [2024-12-08 06:12:48.571946] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
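Condensed for readability, the ftl/common.sh steps traced above export the FTL bdev over NVMe/TCP and then start a second SPDK app on core 1 (RPC socket /var/tmp/spdk.tgt.sock) to act as the initiator, whose bdev_nvme_attach_controller call is traced just below. The commands are exactly the ones logged, with only the /home/vagrant/spdk_repo/spdk prefix trimmed:

  # Target side: expose bdev "ftl" as a namespace of nqn.2018-09.io.spdk:cnode0 on 127.0.0.1:4420.
  scripts/rpc.py nvmf_create_transport --trtype TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  scripts/rpc.py save_config

  # Initiator side: attach the exported controller; the namespace appears as bdev ftln1.
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0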
00:25:25.636 [2024-12-08 06:12:48.572361] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91655 ] 00:25:25.893 [2024-12-08 06:12:48.718273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.893 [2024-12-08 06:12:48.761543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.827 06:12:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.827 06:12:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:26.827 06:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:26.827 ftln1 00:25:26.827 06:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:26.827 06:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91655 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91655 ']' 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91655 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91655 00:25:27.395 killing process with pid 91655 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91655' 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91655 00:25:27.395 06:12:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91655 00:25:27.653 06:12:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:27.654 06:12:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:27.654 [2024-12-08 06:12:50.614507] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:27.654 [2024-12-08 06:12:50.614686] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91696 ] 00:25:27.912 [2024-12-08 06:12:50.759173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.912 [2024-12-08 06:12:50.797048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.287  [2024-12-08T06:12:53.268Z] Copying: 208/1024 [MB] (208 MBps) [2024-12-08T06:12:54.203Z] Copying: 417/1024 [MB] (209 MBps) [2024-12-08T06:12:55.138Z] Copying: 623/1024 [MB] (206 MBps) [2024-12-08T06:12:56.119Z] Copying: 830/1024 [MB] (207 MBps) [2024-12-08T06:12:56.389Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:25:33.344 00:25:33.344 Calculate MD5 checksum, iteration 1 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:33.344 06:12:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:33.344 [2024-12-08 06:12:56.254563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:33.344 [2024-12-08 06:12:56.254758] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91755 ] 00:25:33.603 [2024-12-08 06:12:56.402910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.603 [2024-12-08 06:12:56.440252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.978  [2024-12-08T06:12:58.957Z] Copying: 504/1024 [MB] (504 MBps) [2024-12-08T06:12:58.957Z] Copying: 1004/1024 [MB] (500 MBps) [2024-12-08T06:12:58.957Z] Copying: 1024/1024 [MB] (average 498 MBps) 00:25:35.912 00:25:35.912 06:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:35.912 06:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:38.444 Fill FTL, iteration 2 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2af14f2706f85cd336cdcfd9ebb7258f 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:38.444 06:13:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:38.444 [2024-12-08 06:13:01.145258] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:38.444 [2024-12-08 06:13:01.145424] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91805 ] 00:25:38.444 [2024-12-08 06:13:01.290771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.444 [2024-12-08 06:13:01.334063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.821  [2024-12-08T06:13:03.798Z] Copying: 208/1024 [MB] (208 MBps) [2024-12-08T06:13:04.732Z] Copying: 416/1024 [MB] (208 MBps) [2024-12-08T06:13:05.670Z] Copying: 617/1024 [MB] (201 MBps) [2024-12-08T06:13:06.603Z] Copying: 820/1024 [MB] (203 MBps) [2024-12-08T06:13:06.603Z] Copying: 1022/1024 [MB] (202 MBps) [2024-12-08T06:13:06.861Z] Copying: 1024/1024 [MB] (average 203 MBps) 00:25:43.816 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:43.816 Calculate MD5 checksum, iteration 2 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:43.816 06:13:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:43.816 [2024-12-08 06:13:06.833533] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
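The traffic above follows the loop set up by upgrade_shutdown.sh@28-48: two iterations, each writing 1024 random 1 MiB blocks into ftln1 over NVMe/TCP and then reading them back into test/ftl/file for md5sum, with --seek and --skip advancing 1024 blocks per pass. The following is a sketch reconstructed from the traced variables and commands, not a verbatim copy of the script; tcp_dd is the test's wrapper that runs spdk_dd against the initiator's RPC socket, and the repo prefix is trimmed from the file path:

  seek=0; skip=0; iterations=2; sums=()
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
      seek=$((seek + 1024))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')
  done

The iteration 1 checksum was recorded above as sums[i]=2af14f2706f85cd336cdcfd9ebb7258f; the iteration 2 checksum follows just below.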
00:25:43.816 [2024-12-08 06:13:06.833872] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91868 ] 00:25:44.075 [2024-12-08 06:13:06.975108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.075 [2024-12-08 06:13:07.011854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.450  [2024-12-08T06:13:09.431Z] Copying: 466/1024 [MB] (466 MBps) [2024-12-08T06:13:09.689Z] Copying: 947/1024 [MB] (481 MBps) [2024-12-08T06:13:09.948Z] Copying: 1024/1024 [MB] (average 474 MBps) 00:25:46.903 00:25:47.160 06:13:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:47.160 06:13:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:49.089 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:49.089 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ac8226aba082b8f3ec2fb787593fb259 00:25:49.089 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:49.089 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:49.089 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:49.348 [2024-12-08 06:13:12.282207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.348 [2024-12-08 06:13:12.282443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.348 [2024-12-08 06:13:12.282475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:49.348 [2024-12-08 06:13:12.282488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.348 [2024-12-08 06:13:12.282531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.348 [2024-12-08 06:13:12.282549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.348 [2024-12-08 06:13:12.282569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:49.348 [2024-12-08 06:13:12.282580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.348 [2024-12-08 06:13:12.282613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.348 [2024-12-08 06:13:12.282627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.348 [2024-12-08 06:13:12.282639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.348 [2024-12-08 06:13:12.282649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.348 [2024-12-08 06:13:12.282730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.524 ms, result 0 00:25:49.348 true 00:25:49.348 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.608 { 00:25:49.608 "name": "ftl", 00:25:49.608 "properties": [ 00:25:49.608 { 00:25:49.608 "name": "superblock_version", 00:25:49.608 "value": 5, 00:25:49.608 "read-only": true 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "name": "base_device", 00:25:49.608 "bands": [ 00:25:49.608 { 00:25:49.608 "id": 
0, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 1, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 2, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 3, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 4, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 5, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 6, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 7, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 8, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 9, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 10, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 11, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 12, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 13, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 14, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 15, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 16, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 17, 00:25:49.608 "state": "FREE", 00:25:49.608 "validity": 0.0 00:25:49.608 } 00:25:49.608 ], 00:25:49.608 "read-only": true 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "name": "cache_device", 00:25:49.608 "type": "bdev", 00:25:49.608 "chunks": [ 00:25:49.608 { 00:25:49.608 "id": 0, 00:25:49.608 "state": "INACTIVE", 00:25:49.608 "utilization": 0.0 00:25:49.608 }, 00:25:49.608 { 00:25:49.608 "id": 1, 00:25:49.608 "state": "CLOSED", 00:25:49.609 "utilization": 1.0 00:25:49.609 }, 00:25:49.609 { 00:25:49.609 "id": 2, 00:25:49.609 "state": "CLOSED", 00:25:49.609 "utilization": 1.0 00:25:49.609 }, 00:25:49.609 { 00:25:49.609 "id": 3, 00:25:49.609 "state": "OPEN", 00:25:49.609 "utilization": 0.001953125 00:25:49.609 }, 00:25:49.609 { 00:25:49.609 "id": 4, 00:25:49.609 "state": "OPEN", 00:25:49.609 "utilization": 0.0 00:25:49.609 } 00:25:49.609 ], 00:25:49.609 "read-only": true 00:25:49.609 }, 00:25:49.609 { 00:25:49.609 "name": "verbose_mode", 00:25:49.609 "value": true, 00:25:49.609 "unit": "", 00:25:49.609 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:49.609 }, 00:25:49.609 { 00:25:49.609 "name": "prep_upgrade_on_shutdown", 00:25:49.609 "value": false, 00:25:49.609 "unit": "", 00:25:49.609 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:49.609 } 00:25:49.609 ] 00:25:49.609 } 00:25:49.609 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:49.867 [2024-12-08 06:13:12.806857] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.867 [2024-12-08 06:13:12.807115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.867 [2024-12-08 06:13:12.807163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:49.867 [2024-12-08 06:13:12.807175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.867 [2024-12-08 06:13:12.807241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.867 [2024-12-08 06:13:12.807260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.867 [2024-12-08 06:13:12.807272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:49.867 [2024-12-08 06:13:12.807282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.867 [2024-12-08 06:13:12.807309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.867 [2024-12-08 06:13:12.807323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.867 [2024-12-08 06:13:12.807334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.867 [2024-12-08 06:13:12.807344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.867 [2024-12-08 06:13:12.807421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.543 ms, result 0 00:25:49.867 true 00:25:49.867 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:49.867 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.867 06:13:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:50.126 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:50.126 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:50.126 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:50.385 [2024-12-08 06:13:13.339119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.385 [2024-12-08 06:13:13.339225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:50.385 [2024-12-08 06:13:13.339263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:50.385 [2024-12-08 06:13:13.339275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.385 [2024-12-08 06:13:13.339311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.385 [2024-12-08 06:13:13.339327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:50.385 [2024-12-08 06:13:13.339356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:50.385 [2024-12-08 06:13:13.339366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.385 [2024-12-08 06:13:13.339394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.385 [2024-12-08 06:13:13.339408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:50.385 [2024-12-08 06:13:13.339420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:50.385 [2024-12-08 
06:13:13.339445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.385 [2024-12-08 06:13:13.339537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.396 ms, result 0 00:25:50.385 true 00:25:50.385 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:50.644 { 00:25:50.644 "name": "ftl", 00:25:50.644 "properties": [ 00:25:50.644 { 00:25:50.644 "name": "superblock_version", 00:25:50.644 "value": 5, 00:25:50.644 "read-only": true 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "name": "base_device", 00:25:50.644 "bands": [ 00:25:50.644 { 00:25:50.644 "id": 0, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 1, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 2, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 3, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 4, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 5, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 6, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 7, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 8, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 9, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 10, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 11, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 12, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 13, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 14, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 15, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 16, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 17, 00:25:50.644 "state": "FREE", 00:25:50.644 "validity": 0.0 00:25:50.644 } 00:25:50.644 ], 00:25:50.644 "read-only": true 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "name": "cache_device", 00:25:50.644 "type": "bdev", 00:25:50.644 "chunks": [ 00:25:50.644 { 00:25:50.644 "id": 0, 00:25:50.644 "state": "INACTIVE", 00:25:50.644 "utilization": 0.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 1, 00:25:50.644 "state": "CLOSED", 00:25:50.644 "utilization": 1.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 2, 00:25:50.644 "state": "CLOSED", 00:25:50.644 "utilization": 1.0 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 3, 00:25:50.644 "state": "OPEN", 00:25:50.644 "utilization": 0.001953125 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "id": 4, 00:25:50.644 "state": "OPEN", 00:25:50.644 "utilization": 0.0 00:25:50.644 } 00:25:50.644 ], 00:25:50.644 "read-only": true 00:25:50.644 
}, 00:25:50.644 { 00:25:50.644 "name": "verbose_mode", 00:25:50.644 "value": true, 00:25:50.644 "unit": "", 00:25:50.644 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:50.644 }, 00:25:50.644 { 00:25:50.644 "name": "prep_upgrade_on_shutdown", 00:25:50.644 "value": true, 00:25:50.644 "unit": "", 00:25:50.644 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:50.644 } 00:25:50.644 ] 00:25:50.644 } 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91537 ]] 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91537 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91537 ']' 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91537 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91537 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:50.644 killing process with pid 91537 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91537' 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91537 00:25:50.644 06:13:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91537 00:25:50.903 [2024-12-08 06:13:13.759858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:50.903 [2024-12-08 06:13:13.763884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.903 [2024-12-08 06:13:13.764127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:50.903 [2024-12-08 06:13:13.764296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:50.903 [2024-12-08 06:13:13.764353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.903 [2024-12-08 06:13:13.764525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:50.903 [2024-12-08 06:13:13.765163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.904 [2024-12-08 06:13:13.765376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:50.904 [2024-12-08 06:13:13.765490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.454 ms 00:25:50.904 [2024-12-08 06:13:13.765540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.887 [2024-12-08 06:13:22.333612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.887 [2024-12-08 06:13:22.333879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:00.887 [2024-12-08 06:13:22.334014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8568.009 ms 00:26:00.887 [2024-12-08 06:13:22.334168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.887 [2024-12-08 
06:13:22.335529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.887 [2024-12-08 06:13:22.335700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:00.887 [2024-12-08 06:13:22.335852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.259 ms 00:26:00.887 [2024-12-08 06:13:22.335996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.887 [2024-12-08 06:13:22.337264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.887 [2024-12-08 06:13:22.337413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:00.887 [2024-12-08 06:13:22.337535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.177 ms 00:26:00.887 [2024-12-08 06:13:22.337678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.887 [2024-12-08 06:13:22.339296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.339466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:00.888 [2024-12-08 06:13:22.339590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.478 ms 00:26:00.888 [2024-12-08 06:13:22.339741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.342119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.342338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:00.888 [2024-12-08 06:13:22.342480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.272 ms 00:26:00.888 [2024-12-08 06:13:22.342536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.342764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.342908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:00.888 [2024-12-08 06:13:22.343036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:26:00.888 [2024-12-08 06:13:22.343059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.344603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.344723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:00.888 [2024-12-08 06:13:22.344849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.514 ms 00:26:00.888 [2024-12-08 06:13:22.344962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.346285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.346442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:00.888 [2024-12-08 06:13:22.346570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.237 ms 00:26:00.888 [2024-12-08 06:13:22.346689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.347955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.348114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:00.888 [2024-12-08 06:13:22.348285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.181 ms 00:26:00.888 [2024-12-08 06:13:22.348335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:26:00.888 [2024-12-08 06:13:22.349676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.349837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:00.888 [2024-12-08 06:13:22.349987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.184 ms 00:26:00.888 [2024-12-08 06:13:22.350036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.350090] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:00.888 [2024-12-08 06:13:22.350124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:00.888 [2024-12-08 06:13:22.350158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:00.888 [2024-12-08 06:13:22.350171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:00.888 [2024-12-08 06:13:22.350184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:00.888 [2024-12-08 06:13:22.350389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:00.888 [2024-12-08 06:13:22.350401] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 972fe621-616f-4377-97a5-efd6dbc3ad00 00:26:00.888 [2024-12-08 06:13:22.350413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:00.888 [2024-12-08 06:13:22.350439] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:00.888 [2024-12-08 06:13:22.350450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:00.888 [2024-12-08 06:13:22.350461] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:00.888 [2024-12-08 06:13:22.350472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:00.888 [2024-12-08 06:13:22.350491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:00.888 [2024-12-08 06:13:22.350511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:00.888 [2024-12-08 06:13:22.350520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:00.888 [2024-12-08 06:13:22.350545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:00.888 [2024-12-08 06:13:22.350556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.350572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:00.888 [2024-12-08 06:13:22.350584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:26:00.888 [2024-12-08 06:13:22.350596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.352119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.352148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:00.888 [2024-12-08 06:13:22.352161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.488 ms 00:26:00.888 [2024-12-08 06:13:22.352180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.352285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.888 [2024-12-08 06:13:22.352304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:00.888 [2024-12-08 06:13:22.352316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:26:00.888 [2024-12-08 06:13:22.352327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.357728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.357928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:00.888 [2024-12-08 06:13:22.358053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.358233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.358422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.358547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:00.888 [2024-12-08 06:13:22.358689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.358756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.358932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.359022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:00.888 [2024-12-08 06:13:22.359135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.359177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 
[2024-12-08 06:13:22.359239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.359254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:00.888 [2024-12-08 06:13:22.359266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.359277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.367990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.368303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:00.888 [2024-12-08 06:13:22.368427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.368490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.375644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.375861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:00.888 [2024-12-08 06:13:22.375977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.376086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.376241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.376299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:00.888 [2024-12-08 06:13:22.376413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.376556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.376663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.376694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:00.888 [2024-12-08 06:13:22.376709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.888 [2024-12-08 06:13:22.376732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.888 [2024-12-08 06:13:22.376828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.888 [2024-12-08 06:13:22.376848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:00.888 [2024-12-08 06:13:22.376860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.889 [2024-12-08 06:13:22.376872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.889 [2024-12-08 06:13:22.376927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.889 [2024-12-08 06:13:22.376959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:00.889 [2024-12-08 06:13:22.376971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.889 [2024-12-08 06:13:22.376982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.889 [2024-12-08 06:13:22.377041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.889 [2024-12-08 06:13:22.377057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:00.889 [2024-12-08 06:13:22.377079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.889 [2024-12-08 06:13:22.377090] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:00.889 [2024-12-08 06:13:22.377147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.889 [2024-12-08 06:13:22.377165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:00.889 [2024-12-08 06:13:22.377176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.889 [2024-12-08 06:13:22.377187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.889 [2024-12-08 06:13:22.377394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8613.535 ms, result 0 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92046 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92046 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92046 ']' 00:26:01.841 06:13:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.842 06:13:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.842 06:13:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.842 06:13:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.842 06:13:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:01.842 [2024-12-08 06:13:24.726378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
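The WAF figure in the ftl_dev_dump_stats output at the top of the 'FTL shutdown' sequence above is simply total media writes divided by user writes. A minimal shell sketch, using the counters exactly as dumped (786752 total, 524288 user), reproduces the reported 1.5006:

# WAF = total writes / user writes (values copied from the stats dump above)
awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'
# prints: WAF: 1.5006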
00:26:01.842 [2024-12-08 06:13:24.726591] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92046 ] 00:26:01.842 [2024-12-08 06:13:24.875294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.102 [2024-12-08 06:13:24.912501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.361 [2024-12-08 06:13:25.173633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:02.361 [2024-12-08 06:13:25.173728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:02.361 [2024-12-08 06:13:25.322384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.322655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:02.361 [2024-12-08 06:13:25.322693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:02.361 [2024-12-08 06:13:25.322717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.322806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.322825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:02.361 [2024-12-08 06:13:25.322838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:02.361 [2024-12-08 06:13:25.322848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.322898] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:02.361 [2024-12-08 06:13:25.323179] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:02.361 [2024-12-08 06:13:25.323225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.323238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:02.361 [2024-12-08 06:13:25.323255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:26:02.361 [2024-12-08 06:13:25.323266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.324471] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:02.361 [2024-12-08 06:13:25.326662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.326730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:02.361 [2024-12-08 06:13:25.326772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.193 ms 00:26:02.361 [2024-12-08 06:13:25.326787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.326857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.326876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:02.361 [2024-12-08 06:13:25.326888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:02.361 [2024-12-08 06:13:25.326899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.331258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 
06:13:25.331302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:02.361 [2024-12-08 06:13:25.331338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.281 ms 00:26:02.361 [2024-12-08 06:13:25.331348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.331420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.331463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:02.361 [2024-12-08 06:13:25.331492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:02.361 [2024-12-08 06:13:25.331503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.331589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.331609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:02.361 [2024-12-08 06:13:25.331620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:02.361 [2024-12-08 06:13:25.331631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.331674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:02.361 [2024-12-08 06:13:25.333162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.333224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:02.361 [2024-12-08 06:13:25.333242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.483 ms 00:26:02.361 [2024-12-08 06:13:25.333252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.333344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.361 [2024-12-08 06:13:25.333374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:02.361 [2024-12-08 06:13:25.333386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:02.361 [2024-12-08 06:13:25.333402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.361 [2024-12-08 06:13:25.333460] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:02.361 [2024-12-08 06:13:25.333490] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:02.361 [2024-12-08 06:13:25.333535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:02.361 [2024-12-08 06:13:25.333567] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:02.361 [2024-12-08 06:13:25.333679] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:02.361 [2024-12-08 06:13:25.333710] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:02.361 [2024-12-08 06:13:25.333733] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:02.361 [2024-12-08 06:13:25.333751] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:02.361 [2024-12-08 06:13:25.333765] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:02.361 [2024-12-08 06:13:25.333777] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:02.361 [2024-12-08 06:13:25.333789] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:02.361 [2024-12-08 06:13:25.333800] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:02.362 [2024-12-08 06:13:25.333811] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:02.362 [2024-12-08 06:13:25.333833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.362 [2024-12-08 06:13:25.333845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:02.362 [2024-12-08 06:13:25.333857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.377 ms 00:26:02.362 [2024-12-08 06:13:25.333868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.362 [2024-12-08 06:13:25.333976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.362 [2024-12-08 06:13:25.333993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:02.362 [2024-12-08 06:13:25.334005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:26:02.362 [2024-12-08 06:13:25.334016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.362 [2024-12-08 06:13:25.334142] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:02.362 [2024-12-08 06:13:25.334166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:02.362 [2024-12-08 06:13:25.334187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:02.362 [2024-12-08 06:13:25.334222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:02.362 [2024-12-08 06:13:25.334288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:02.362 [2024-12-08 06:13:25.334298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:02.362 [2024-12-08 06:13:25.334309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:02.362 [2024-12-08 06:13:25.334329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:02.362 [2024-12-08 06:13:25.334339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:02.362 [2024-12-08 06:13:25.334375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:02.362 [2024-12-08 06:13:25.334385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:02.362 [2024-12-08 06:13:25.334429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:02.362 [2024-12-08 06:13:25.334440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334451] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:02.362 [2024-12-08 06:13:25.334461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:02.362 [2024-12-08 06:13:25.334470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:02.362 [2024-12-08 06:13:25.334490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:02.362 [2024-12-08 06:13:25.334499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:02.362 [2024-12-08 06:13:25.334518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:02.362 [2024-12-08 06:13:25.334527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:02.362 [2024-12-08 06:13:25.334546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:02.362 [2024-12-08 06:13:25.334556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:02.362 [2024-12-08 06:13:25.334575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:02.362 [2024-12-08 06:13:25.334587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:02.362 [2024-12-08 06:13:25.334607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:02.362 [2024-12-08 06:13:25.334635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:02.362 [2024-12-08 06:13:25.334664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:02.362 [2024-12-08 06:13:25.334673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334682] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:02.362 [2024-12-08 06:13:25.334696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:02.362 [2024-12-08 06:13:25.334706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:02.362 [2024-12-08 06:13:25.334738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:02.362 [2024-12-08 06:13:25.334748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:02.362 [2024-12-08 06:13:25.334761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:02.362 [2024-12-08 06:13:25.334772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:02.362 [2024-12-08 06:13:25.334782] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:02.362 [2024-12-08 06:13:25.334792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:02.362 [2024-12-08 06:13:25.334803] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:02.362 [2024-12-08 06:13:25.334816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:02.362 [2024-12-08 06:13:25.334839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:02.362 [2024-12-08 06:13:25.334870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:02.362 [2024-12-08 06:13:25.334880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:02.362 [2024-12-08 06:13:25.334890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:02.362 [2024-12-08 06:13:25.334901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:02.362 [2024-12-08 06:13:25.334976] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:02.362 [2024-12-08 06:13:25.334988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.334999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:02.362 [2024-12-08 06:13:25.335009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:02.362 [2024-12-08 06:13:25.335019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:02.362 [2024-12-08 06:13:25.335030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:02.362 [2024-12-08 06:13:25.335052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.362 [2024-12-08 06:13:25.335063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:02.362 [2024-12-08 06:13:25.335074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.982 ms 00:26:02.362 [2024-12-08 06:13:25.335084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.362 [2024-12-08 06:13:25.335145] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:02.362 [2024-12-08 06:13:25.335162] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:04.262 [2024-12-08 06:13:27.297702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.262 [2024-12-08 06:13:27.297769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:04.262 [2024-12-08 06:13:27.297789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1962.570 ms 00:26:04.262 [2024-12-08 06:13:27.297800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.262 [2024-12-08 06:13:27.305899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.263 [2024-12-08 06:13:27.305959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:04.263 [2024-12-08 06:13:27.305997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.992 ms 00:26:04.263 [2024-12-08 06:13:27.306008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.263 [2024-12-08 06:13:27.306088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.263 [2024-12-08 06:13:27.306105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:04.263 [2024-12-08 06:13:27.306117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:04.263 [2024-12-08 06:13:27.306143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.328759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.522 [2024-12-08 06:13:27.328848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:04.522 [2024-12-08 06:13:27.328880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.511 ms 00:26:04.522 [2024-12-08 06:13:27.328898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.329006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.522 [2024-12-08 06:13:27.329032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:04.522 [2024-12-08 06:13:27.329052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:04.522 [2024-12-08 06:13:27.329088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.329684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.522 [2024-12-08 06:13:27.329745] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:04.522 [2024-12-08 06:13:27.329769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:26:04.522 [2024-12-08 06:13:27.329786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.329881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.522 [2024-12-08 06:13:27.329907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:04.522 [2024-12-08 06:13:27.329926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:04.522 [2024-12-08 06:13:27.329943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.338426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.522 [2024-12-08 06:13:27.338519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:04.522 [2024-12-08 06:13:27.338560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.436 ms 00:26:04.522 [2024-12-08 06:13:27.338581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.341599] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:04.522 [2024-12-08 06:13:27.341642] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:04.522 [2024-12-08 06:13:27.341677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.522 [2024-12-08 06:13:27.341701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:04.522 [2024-12-08 06:13:27.341723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.848 ms 00:26:04.522 [2024-12-08 06:13:27.341733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.345959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.522 [2024-12-08 06:13:27.345999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:04.522 [2024-12-08 06:13:27.346039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.178 ms 00:26:04.522 [2024-12-08 06:13:27.346059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.522 [2024-12-08 06:13:27.347920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.347959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:04.523 [2024-12-08 06:13:27.347990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.812 ms 00:26:04.523 [2024-12-08 06:13:27.348000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.349667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.349705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:04.523 [2024-12-08 06:13:27.349737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.624 ms 00:26:04.523 [2024-12-08 06:13:27.349747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.350131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.350158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:04.523 [2024-12-08 
06:13:27.350171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:26:04.523 [2024-12-08 06:13:27.350211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.366286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.366633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:04.523 [2024-12-08 06:13:27.366674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.030 ms 00:26:04.523 [2024-12-08 06:13:27.366688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.374392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:04.523 [2024-12-08 06:13:27.375107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.375145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:04.523 [2024-12-08 06:13:27.375162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.353 ms 00:26:04.523 [2024-12-08 06:13:27.375219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.375355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.375376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:04.523 [2024-12-08 06:13:27.375390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:04.523 [2024-12-08 06:13:27.375411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.375522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.375558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:04.523 [2024-12-08 06:13:27.375571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:04.523 [2024-12-08 06:13:27.375594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.375637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.375653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:04.523 [2024-12-08 06:13:27.375665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:04.523 [2024-12-08 06:13:27.375676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.375720] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:04.523 [2024-12-08 06:13:27.375737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.375748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:04.523 [2024-12-08 06:13:27.375760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:04.523 [2024-12-08 06:13:27.375772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.379058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.379107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:04.523 [2024-12-08 06:13:27.379141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.260 ms 00:26:04.523 [2024-12-08 06:13:27.379152] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.379255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.523 [2024-12-08 06:13:27.379283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:04.523 [2024-12-08 06:13:27.379296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:04.523 [2024-12-08 06:13:27.379306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.523 [2024-12-08 06:13:27.380556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2057.652 ms, result 0 00:26:04.523 [2024-12-08 06:13:27.395926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.523 [2024-12-08 06:13:27.411948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:04.523 [2024-12-08 06:13:27.420067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:04.523 06:13:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:04.523 06:13:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:04.523 06:13:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:04.523 06:13:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:04.523 06:13:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:05.091 [2024-12-08 06:13:27.824453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.091 [2024-12-08 06:13:27.824552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:05.091 [2024-12-08 06:13:27.824586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:05.091 [2024-12-08 06:13:27.824597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.091 [2024-12-08 06:13:27.824630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.091 [2024-12-08 06:13:27.824658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:05.091 [2024-12-08 06:13:27.824685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:05.091 [2024-12-08 06:13:27.824696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.091 [2024-12-08 06:13:27.824724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.091 [2024-12-08 06:13:27.824743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:05.091 [2024-12-08 06:13:27.824755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:05.091 [2024-12-08 06:13:27.824765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.091 [2024-12-08 06:13:27.824838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.377 ms, result 0 00:26:05.091 true 00:26:05.091 06:13:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:05.091 { 00:26:05.091 "name": "ftl", 00:26:05.091 "properties": [ 00:26:05.091 { 00:26:05.091 "name": "superblock_version", 00:26:05.091 "value": 5, 00:26:05.091 "read-only": true 00:26:05.091 }, 00:26:05.091 { 
00:26:05.091 "name": "base_device", 00:26:05.091 "bands": [ 00:26:05.091 { 00:26:05.091 "id": 0, 00:26:05.091 "state": "CLOSED", 00:26:05.091 "validity": 1.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 1, 00:26:05.091 "state": "CLOSED", 00:26:05.091 "validity": 1.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 2, 00:26:05.091 "state": "CLOSED", 00:26:05.091 "validity": 0.007843137254901933 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 3, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 4, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 5, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 6, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 7, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 8, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 9, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 10, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 11, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 12, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 13, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 14, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 15, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 16, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 17, 00:26:05.091 "state": "FREE", 00:26:05.091 "validity": 0.0 00:26:05.091 } 00:26:05.091 ], 00:26:05.091 "read-only": true 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "name": "cache_device", 00:26:05.091 "type": "bdev", 00:26:05.091 "chunks": [ 00:26:05.091 { 00:26:05.091 "id": 0, 00:26:05.091 "state": "INACTIVE", 00:26:05.091 "utilization": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 1, 00:26:05.091 "state": "OPEN", 00:26:05.091 "utilization": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 2, 00:26:05.091 "state": "OPEN", 00:26:05.091 "utilization": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 3, 00:26:05.091 "state": "FREE", 00:26:05.091 "utilization": 0.0 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "id": 4, 00:26:05.091 "state": "FREE", 00:26:05.091 "utilization": 0.0 00:26:05.091 } 00:26:05.091 ], 00:26:05.091 "read-only": true 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "name": "verbose_mode", 00:26:05.091 "value": true, 00:26:05.091 "unit": "", 00:26:05.091 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:05.091 }, 00:26:05.091 { 00:26:05.091 "name": "prep_upgrade_on_shutdown", 00:26:05.091 "value": false, 00:26:05.091 "unit": "", 00:26:05.091 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:05.091 } 00:26:05.091 ] 00:26:05.091 } 00:26:05.091 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:05.091 06:13:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:05.091 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:05.351 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:05.351 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:05.351 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:05.351 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:05.351 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:05.610 Validate MD5 checksum, iteration 1 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:05.610 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:05.871 [2024-12-08 06:13:28.723986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
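The two jq filters traced above gate the test on the freshly started instance being idle: one counts cache chunks with non-zero utilization, the other counts OPENED bands, and both must come back 0. A hedged sketch of the first check, with $RPC standing in for the harness's rpc.py invocation (an assumed variable, not part of the script):

# Count in-use NV cache chunks from bdev_ftl_get_properties (expects 0 here)
used=$($RPC bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
[[ $used -ne 0 ]] && echo "unexpected in-use chunks: $used"

Against the JSON dumped above (chunks 1 and 2 OPEN at utilization 0.0, the rest FREE or INACTIVE), the filter selects nothing, so used=0 and the checksum phase proceeds.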
00:26:05.871 [2024-12-08 06:13:28.724453] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92102 ] 00:26:05.871 [2024-12-08 06:13:28.875583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.130 [2024-12-08 06:13:28.919093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.588  [2024-12-08T06:13:31.569Z] Copying: 478/1024 [MB] (478 MBps) [2024-12-08T06:13:31.569Z] Copying: 959/1024 [MB] (481 MBps) [2024-12-08T06:13:32.136Z] Copying: 1024/1024 [MB] (average 480 MBps) 00:26:09.091 00:26:09.091 06:13:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:09.091 06:13:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:11.626 Validate MD5 checksum, iteration 2 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2af14f2706f85cd336cdcfd9ebb7258f 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2af14f2706f85cd336cdcfd9ebb7258f != \2\a\f\1\4\f\2\7\0\6\f\8\5\c\d\3\3\6\c\d\c\f\d\9\e\b\b\7\2\5\8\f ]] 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:11.626 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:11.626 [2024-12-08 06:13:34.250896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
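Each validation pass above reads one 1024 MiB window of ftln1 over NVMe/TCP via the harness's tcp_dd helper, advances the skip offset by the amount copied, and compares the window's md5sum against a reference checksum. A hedged reconstruction of that loop (FILE, iterations, and expected are illustrative stand-ins; the real script sources these elsewhere):

validate_checksums() {
    local skip=0 i sum
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))                      # next window: MiB 1024..2047, then 2048..3071, ...
        sum=$(md5sum "$FILE" | cut -f1 -d' ')
        [[ $sum == "$expected" ]] || return 1   # any mismatch fails the test
    done
}

This matches the traced bookkeeping: skip goes 0 to 1024 after iteration 1 and 1024 to 2048 after iteration 2, and each computed sum (2af14f27... then ac8226ab...) is checked against its reference before the loop continues.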
00:26:11.626 [2024-12-08 06:13:34.251171] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92159 ] 00:26:11.626 [2024-12-08 06:13:34.399288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.626 [2024-12-08 06:13:34.442838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.012  [2024-12-08T06:13:36.993Z] Copying: 477/1024 [MB] (477 MBps) [2024-12-08T06:13:37.254Z] Copying: 928/1024 [MB] (451 MBps) [2024-12-08T06:13:37.513Z] Copying: 1024/1024 [MB] (average 466 MBps) 00:26:14.468 00:26:14.468 06:13:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:14.468 06:13:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:16.373 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ac8226aba082b8f3ec2fb787593fb259 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ac8226aba082b8f3ec2fb787593fb259 != \a\c\8\2\2\6\a\b\a\0\8\2\b\8\f\3\e\c\2\f\b\7\8\7\5\9\3\f\b\2\5\9 ]] 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92046 ]] 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92046 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92215 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:16.632 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:16.633 06:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92215 00:26:16.633 06:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92215 ']' 00:26:16.633 06:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.633 06:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:16.633 06:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
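With both checksums validated, the harness forces a dirty shutdown: it SIGKILLs the running target (pid 92046) so FTL never reaches its clean-shutdown path, then immediately relaunches spdk_tgt from the same JSON config and waits for the RPC socket. A rough sketch of that sequence under stated assumptions (SPDK_BIN and tgt_json are illustrative names; tcp_dd above and waitforlisten below are the harness's own helpers, per the traced ftl/common.sh@137-139 and @81-91):

# Dirty shutdown + restart, as traced in ftl/common.sh
kill -9 "$spdk_tgt_pid"                                       # no chance to persist shutdown state
unset spdk_tgt_pid
"$SPDK_BIN/spdk_tgt" '--cpumask=[0]' --config="$tgt_json" &   # hypothetical paths
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"                                 # returns once /var/tmp/spdk.sock answers

The restart traced below then has to load the superblock from a device left dirty, which is the recovery path this upgrade/shutdown test exercises.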
00:26:16.633 06:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:16.633 06:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:16.633 [2024-12-08 06:13:39.513924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:16.633 [2024-12-08 06:13:39.514108] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92215 ] 00:26:16.633 [2024-12-08 06:13:39.657484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.891 [2024-12-08 06:13:39.694459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92046 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:17.152 [2024-12-08 06:13:39.955911] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:17.152 [2024-12-08 06:13:39.955997] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:17.152 [2024-12-08 06:13:40.100330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.100400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:17.152 [2024-12-08 06:13:40.100421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:17.152 [2024-12-08 06:13:40.100433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.100499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.100517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:17.152 [2024-12-08 06:13:40.100528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:17.152 [2024-12-08 06:13:40.100538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.100593] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:17.152 [2024-12-08 06:13:40.100920] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:17.152 [2024-12-08 06:13:40.100951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.100963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:17.152 [2024-12-08 06:13:40.100987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.370 ms 00:26:17.152 [2024-12-08 06:13:40.100998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.101519] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:17.152 [2024-12-08 06:13:40.104655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.104706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:17.152 [2024-12-08 06:13:40.104721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.138 ms 00:26:17.152 [2024-12-08 06:13:40.104732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.105672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:17.152 [2024-12-08 06:13:40.105706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:17.152 [2024-12-08 06:13:40.105720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:17.152 [2024-12-08 06:13:40.105731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.106163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.106222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:17.152 [2024-12-08 06:13:40.106256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:26:17.152 [2024-12-08 06:13:40.106267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.106314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.106351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:17.152 [2024-12-08 06:13:40.106368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:17.152 [2024-12-08 06:13:40.106379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.106413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.106438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:17.152 [2024-12-08 06:13:40.106450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:17.152 [2024-12-08 06:13:40.106467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.106505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:17.152 [2024-12-08 06:13:40.107422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.107485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:17.152 [2024-12-08 06:13:40.107498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.926 ms 00:26:17.152 [2024-12-08 06:13:40.107508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.107541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.107556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:17.152 [2024-12-08 06:13:40.107567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:17.152 [2024-12-08 06:13:40.107577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.107624] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:17.152 [2024-12-08 06:13:40.107652] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:17.152 [2024-12-08 06:13:40.107690] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:17.152 [2024-12-08 06:13:40.107708] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:17.152 [2024-12-08 06:13:40.107844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:17.152 [2024-12-08 06:13:40.107861] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:17.152 [2024-12-08 06:13:40.107875] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:17.152 [2024-12-08 06:13:40.107913] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:17.152 [2024-12-08 06:13:40.107925] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:17.152 [2024-12-08 06:13:40.107935] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:17.152 [2024-12-08 06:13:40.107945] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:17.152 [2024-12-08 06:13:40.107954] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:17.152 [2024-12-08 06:13:40.107963] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:17.152 [2024-12-08 06:13:40.107974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.107991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:17.152 [2024-12-08 06:13:40.108001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:26:17.152 [2024-12-08 06:13:40.108012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.108099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.152 [2024-12-08 06:13:40.108112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:17.152 [2024-12-08 06:13:40.108122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:26:17.152 [2024-12-08 06:13:40.108135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.152 [2024-12-08 06:13:40.108269] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:17.152 [2024-12-08 06:13:40.108288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:17.152 [2024-12-08 06:13:40.108299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:17.152 [2024-12-08 06:13:40.108319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.152 [2024-12-08 06:13:40.108336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:17.152 [2024-12-08 06:13:40.108346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:17.152 [2024-12-08 06:13:40.108356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:17.152 [2024-12-08 06:13:40.108366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:17.152 [2024-12-08 06:13:40.108375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:17.152 [2024-12-08 06:13:40.108385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.152 [2024-12-08 06:13:40.108394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:17.152 [2024-12-08 06:13:40.108408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:17.152 [2024-12-08 06:13:40.108418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.152 [2024-12-08 06:13:40.108427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:17.152 [2024-12-08 06:13:40.108437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:17.152 [2024-12-08 06:13:40.108451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.152 [2024-12-08 06:13:40.108461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:17.152 [2024-12-08 06:13:40.108470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:17.153 [2024-12-08 06:13:40.108480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.153 [2024-12-08 06:13:40.108490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:17.153 [2024-12-08 06:13:40.108499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:17.153 [2024-12-08 06:13:40.108508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:17.153 [2024-12-08 06:13:40.108517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:17.153 [2024-12-08 06:13:40.108526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:17.153 [2024-12-08 06:13:40.108535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:17.153 [2024-12-08 06:13:40.108544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:17.153 [2024-12-08 06:13:40.108553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:17.153 [2024-12-08 06:13:40.108562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:17.153 [2024-12-08 06:13:40.108587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:17.153 [2024-12-08 06:13:40.108596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:17.153 [2024-12-08 06:13:40.108605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:17.153 [2024-12-08 06:13:40.108616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:17.153 [2024-12-08 06:13:40.108627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:17.153 [2024-12-08 06:13:40.108636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.153 [2024-12-08 06:13:40.108645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:17.153 [2024-12-08 06:13:40.108653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:17.153 [2024-12-08 06:13:40.108662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.153 [2024-12-08 06:13:40.108671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:17.153 [2024-12-08 06:13:40.108680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:17.153 [2024-12-08 06:13:40.108689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.153 [2024-12-08 06:13:40.108698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:17.153 [2024-12-08 06:13:40.108707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:17.153 [2024-12-08 06:13:40.108716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:17.153 [2024-12-08 06:13:40.108728] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:17.153 [2024-12-08 06:13:40.108738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:17.153 [2024-12-08 06:13:40.108754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:17.153 [2024-12-08 06:13:40.108764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:17.153 [2024-12-08 06:13:40.108776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:17.153 [2024-12-08 06:13:40.108786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:17.153 [2024-12-08 06:13:40.108795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:17.153 [2024-12-08 06:13:40.108804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:17.153 [2024-12-08 06:13:40.108813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:17.153 [2024-12-08 06:13:40.108822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:17.153 [2024-12-08 06:13:40.108833] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:17.153 [2024-12-08 06:13:40.108845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:17.153 [2024-12-08 06:13:40.108866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:17.153 [2024-12-08 06:13:40.108894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:17.153 [2024-12-08 06:13:40.108904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:17.153 [2024-12-08 06:13:40.108914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:17.153 [2024-12-08 06:13:40.108923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.108985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:17.153 [2024-12-08 06:13:40.108994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:17.153 [2024-12-08 06:13:40.109005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.109016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.153 [2024-12-08 06:13:40.109025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:17.153 [2024-12-08 06:13:40.109044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:17.153 [2024-12-08 06:13:40.109054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:17.153 [2024-12-08 06:13:40.109068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.109078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:17.153 [2024-12-08 06:13:40.109096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.892 ms 00:26:17.153 [2024-12-08 06:13:40.109105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.115124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.115213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:17.153 [2024-12-08 06:13:40.115247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.930 ms 00:26:17.153 [2024-12-08 06:13:40.115257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.115312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.115331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:17.153 [2024-12-08 06:13:40.115344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:17.153 [2024-12-08 06:13:40.115355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.134651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.134733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:17.153 [2024-12-08 06:13:40.134755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.237 ms 00:26:17.153 [2024-12-08 06:13:40.134769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.134853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.134873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:17.153 [2024-12-08 06:13:40.134889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:17.153 [2024-12-08 06:13:40.134902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.135108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.135136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:17.153 [2024-12-08 06:13:40.135152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.092 ms 00:26:17.153 [2024-12-08 06:13:40.135165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.135281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.135303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:17.153 [2024-12-08 06:13:40.135318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:17.153 [2024-12-08 06:13:40.135331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.141618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.141681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:17.153 [2024-12-08 06:13:40.141699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.251 ms 00:26:17.153 [2024-12-08 06:13:40.141714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.141880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.141907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:17.153 [2024-12-08 06:13:40.141923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:17.153 [2024-12-08 06:13:40.141936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.145761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.145815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:17.153 [2024-12-08 06:13:40.145830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.792 ms 00:26:17.153 [2024-12-08 06:13:40.145840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.147231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.153 [2024-12-08 06:13:40.147288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:17.153 [2024-12-08 06:13:40.147304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:26:17.153 [2024-12-08 06:13:40.147315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.153 [2024-12-08 06:13:40.163063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.154 [2024-12-08 06:13:40.163169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:17.154 [2024-12-08 06:13:40.163188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.666 ms 00:26:17.154 [2024-12-08 06:13:40.163199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.154 [2024-12-08 06:13:40.163401] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:17.154 [2024-12-08 06:13:40.163549] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:17.154 [2024-12-08 06:13:40.163659] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:17.154 [2024-12-08 06:13:40.163799] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:17.154 [2024-12-08 06:13:40.163811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.154 [2024-12-08 06:13:40.163822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:17.154 [2024-12-08 
06:13:40.163835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:26:17.154 [2024-12-08 06:13:40.163845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.154 [2024-12-08 06:13:40.163924] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:17.154 [2024-12-08 06:13:40.163949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.154 [2024-12-08 06:13:40.163961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:17.154 [2024-12-08 06:13:40.163982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:17.154 [2024-12-08 06:13:40.163992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.154 [2024-12-08 06:13:40.166609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.154 [2024-12-08 06:13:40.166659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:17.154 [2024-12-08 06:13:40.166673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.587 ms 00:26:17.154 [2024-12-08 06:13:40.166683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.154 [2024-12-08 06:13:40.167363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.154 [2024-12-08 06:13:40.167393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:17.154 [2024-12-08 06:13:40.167406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:17.154 [2024-12-08 06:13:40.167415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.154 [2024-12-08 06:13:40.167560] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:17.154 [2024-12-08 06:13:40.167723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.154 [2024-12-08 06:13:40.167743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:17.154 [2024-12-08 06:13:40.167756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:26:17.154 [2024-12-08 06:13:40.167766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.726 [2024-12-08 06:13:40.719089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.726 [2024-12-08 06:13:40.719192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:17.726 [2024-12-08 06:13:40.719243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 550.953 ms 00:26:17.727 [2024-12-08 06:13:40.719256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.727 [2024-12-08 06:13:40.720594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.727 [2024-12-08 06:13:40.720633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:17.727 [2024-12-08 06:13:40.720648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.966 ms 00:26:17.727 [2024-12-08 06:13:40.720675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.727 [2024-12-08 06:13:40.721085] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:17.727 [2024-12-08 06:13:40.721119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.727 [2024-12-08 06:13:40.721132] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:17.727 [2024-12-08 06:13:40.721144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:26:17.727 [2024-12-08 06:13:40.721156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.727 [2024-12-08 06:13:40.721247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.727 [2024-12-08 06:13:40.721268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:17.727 [2024-12-08 06:13:40.721281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:17.727 [2024-12-08 06:13:40.721292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:17.727 [2024-12-08 06:13:40.721348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 553.789 ms, result 0 00:26:17.727 [2024-12-08 06:13:40.721415] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:17.727 [2024-12-08 06:13:40.721496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:17.727 [2024-12-08 06:13:40.721508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:17.727 [2024-12-08 06:13:40.721519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:26:17.727 [2024-12-08 06:13:40.721530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.267423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.267509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:18.293 [2024-12-08 06:13:41.267529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 545.533 ms 00:26:18.293 [2024-12-08 06:13:41.267541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.268839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.268880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:18.293 [2024-12-08 06:13:41.268895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.904 ms 00:26:18.293 [2024-12-08 06:13:41.268906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.269269] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:18.293 [2024-12-08 06:13:41.269296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.269308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:18.293 [2024-12-08 06:13:41.269320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.366 ms 00:26:18.293 [2024-12-08 06:13:41.269331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.269382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.269401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:18.293 [2024-12-08 06:13:41.269422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:18.293 [2024-12-08 06:13:41.269434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 
06:13:41.269484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 548.081 ms, result 0 00:26:18.293 [2024-12-08 06:13:41.269567] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:18.293 [2024-12-08 06:13:41.269584] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:18.293 [2024-12-08 06:13:41.269596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.269624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:18.293 [2024-12-08 06:13:41.269651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1102.068 ms 00:26:18.293 [2024-12-08 06:13:41.269677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.269719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.269735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:18.293 [2024-12-08 06:13:41.269753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:18.293 [2024-12-08 06:13:41.269765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.277626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:18.293 [2024-12-08 06:13:41.277931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.277957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:18.293 [2024-12-08 06:13:41.277970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.142 ms 00:26:18.293 [2024-12-08 06:13:41.277981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.278785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.278820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:18.293 [2024-12-08 06:13:41.278851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:26:18.293 [2024-12-08 06:13:41.278862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.281572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.281615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:18.293 [2024-12-08 06:13:41.281645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.670 ms 00:26:18.293 [2024-12-08 06:13:41.281656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.281723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.281738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:18.293 [2024-12-08 06:13:41.281750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:18.293 [2024-12-08 06:13:41.281760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.281877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.281905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:18.293 
[2024-12-08 06:13:41.281916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:18.293 [2024-12-08 06:13:41.281926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.293 [2024-12-08 06:13:41.281956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.293 [2024-12-08 06:13:41.281985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:18.293 [2024-12-08 06:13:41.281995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:18.293 [2024-12-08 06:13:41.282004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.294 [2024-12-08 06:13:41.282043] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:18.294 [2024-12-08 06:13:41.282060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.294 [2024-12-08 06:13:41.282075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:18.294 [2024-12-08 06:13:41.282095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:18.294 [2024-12-08 06:13:41.282105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.294 [2024-12-08 06:13:41.282182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.294 [2024-12-08 06:13:41.282202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:18.294 [2024-12-08 06:13:41.282213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:18.294 [2024-12-08 06:13:41.282223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.294 [2024-12-08 06:13:41.283335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1182.527 ms, result 0 00:26:18.294 [2024-12-08 06:13:41.298308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.294 [2024-12-08 06:13:41.314325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:18.294 [2024-12-08 06:13:41.322425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:18.552 Validate MD5 checksum, iteration 1 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:18.552 06:13:41 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:18.552 06:13:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:18.552 [2024-12-08 06:13:41.451452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:18.552 [2024-12-08 06:13:41.451916] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92242 ] 00:26:18.811 [2024-12-08 06:13:41.602829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.811 [2024-12-08 06:13:41.645949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.185  [2024-12-08T06:13:44.164Z] Copying: 474/1024 [MB] (474 MBps) [2024-12-08T06:13:44.423Z] Copying: 947/1024 [MB] (473 MBps) [2024-12-08T06:13:46.319Z] Copying: 1024/1024 [MB] (average 475 MBps) 00:26:23.274 00:26:23.274 06:13:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:23.275 06:13:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:25.175 Validate MD5 checksum, iteration 2 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2af14f2706f85cd336cdcfd9ebb7258f 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2af14f2706f85cd336cdcfd9ebb7258f != \2\a\f\1\4\f\2\7\0\6\f\8\5\c\d\3\3\6\c\d\c\f\d\9\e\b\b\7\2\5\8\f ]] 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:25.175 06:13:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:25.434 
[2024-12-08 06:13:48.286256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:25.434 [2024-12-08 06:13:48.286744] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92316 ] 00:26:25.434 [2024-12-08 06:13:48.433410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.434 [2024-12-08 06:13:48.477198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.811  [2024-12-08T06:13:51.234Z] Copying: 508/1024 [MB] (508 MBps) [2024-12-08T06:13:51.234Z] Copying: 1012/1024 [MB] (504 MBps) [2024-12-08T06:13:52.168Z] Copying: 1024/1024 [MB] (average 506 MBps) 00:26:29.123 00:26:29.123 06:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:29.123 06:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ac8226aba082b8f3ec2fb787593fb259 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ac8226aba082b8f3ec2fb787593fb259 != \a\c\8\2\2\6\a\b\a\0\8\2\b\8\f\3\e\c\2\f\b\7\8\7\5\9\3\f\b\2\5\9 ]] 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92215 ]] 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92215 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92215 ']' 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92215 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92215 00:26:31.674 killing process with pid 92215 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 92215' 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92215 00:26:31.674 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92215 00:26:31.674 [2024-12-08 06:13:54.579620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:31.674 [2024-12-08 06:13:54.583817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.674 [2024-12-08 06:13:54.583862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:31.674 [2024-12-08 06:13:54.583910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:31.674 [2024-12-08 06:13:54.583921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.674 [2024-12-08 06:13:54.583949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:31.674 [2024-12-08 06:13:54.584372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.674 [2024-12-08 06:13:54.584390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:31.674 [2024-12-08 06:13:54.584401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:26:31.674 [2024-12-08 06:13:54.584411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.674 [2024-12-08 06:13:54.584635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.674 [2024-12-08 06:13:54.584652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:31.674 [2024-12-08 06:13:54.584667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:26:31.674 [2024-12-08 06:13:54.584677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.674 [2024-12-08 06:13:54.585951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.674 [2024-12-08 06:13:54.586001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:31.674 [2024-12-08 06:13:54.586017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.253 ms 00:26:31.674 [2024-12-08 06:13:54.586036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.674 [2024-12-08 06:13:54.587303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.674 [2024-12-08 06:13:54.587350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:31.675 [2024-12-08 06:13:54.587364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.212 ms 00:26:31.675 [2024-12-08 06:13:54.587380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.588726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.588774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:31.675 [2024-12-08 06:13:54.588805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.309 ms 00:26:31.675 [2024-12-08 06:13:54.588816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.589989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.590038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:31.675 [2024-12-08 06:13:54.590069] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 1.135 ms 00:26:31.675 [2024-12-08 06:13:54.590103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.590237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.590257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:31.675 [2024-12-08 06:13:54.590269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:26:31.675 [2024-12-08 06:13:54.590280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.591623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.591823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:31.675 [2024-12-08 06:13:54.591879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.306 ms 00:26:31.675 [2024-12-08 06:13:54.591890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.593163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.593241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:31.675 [2024-12-08 06:13:54.593256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.229 ms 00:26:31.675 [2024-12-08 06:13:54.593265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.594503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.594578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:31.675 [2024-12-08 06:13:54.594607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.200 ms 00:26:31.675 [2024-12-08 06:13:54.594616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.595714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.595796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:31.675 [2024-12-08 06:13:54.595826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.033 ms 00:26:31.675 [2024-12-08 06:13:54.595836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.595886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:31.675 [2024-12-08 06:13:54.595906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:31.675 [2024-12-08 06:13:54.595918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:31.675 [2024-12-08 06:13:54.595929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:31.675 [2024-12-08 06:13:54.595940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.595950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.595960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.595970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.595980] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.595990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:31.675 [2024-12-08 06:13:54.596093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:31.675 [2024-12-08 06:13:54.596103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 972fe621-616f-4377-97a5-efd6dbc3ad00 00:26:31.675 [2024-12-08 06:13:54.596120] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:31.675 [2024-12-08 06:13:54.596129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:31.675 [2024-12-08 06:13:54.596138] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:31.675 [2024-12-08 06:13:54.596148] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:31.675 [2024-12-08 06:13:54.596158] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:31.675 [2024-12-08 06:13:54.596168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:31.675 [2024-12-08 06:13:54.596177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:31.675 [2024-12-08 06:13:54.596220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:31.675 [2024-12-08 06:13:54.596231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:31.675 [2024-12-08 06:13:54.596242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.596253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:31.675 [2024-12-08 06:13:54.596264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:26:31.675 [2024-12-08 06:13:54.596287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.597527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.597546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:31.675 [2024-12-08 06:13:54.597558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.217 ms 
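The xtrace a few records up (upgrade_shutdown.sh@96 through @105) shows the shape of the checksum-validation loop this test runs: zero the skip offset, read 1024 MiB through tcp_dd per iteration, advance skip, and compare the md5sum of the output file against the expected digest. A condensed bash sketch reconstructed from that trace; iterations is defined elsewhere in the script, and the expected_sums array is an assumption standing in for wherever the pre-shutdown digests are recorded, since that source is not visible in this excerpt:

    # Reconstructed from the upgrade_shutdown.sh xtrace above; a sketch, not the verbatim source
    test_validate_checksum() {
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # @99: pull 1024 x 1 MiB blocks from the ftln1 bdev over NVMe/TCP
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))                            # @100: next iteration reads the next 1 GiB
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')    # @102/@103: digest of the restored data
            [[ $sum == "${expected_sums[i]}" ]] || return 1  # @105: fail on mismatch (expected source assumed)
        done
    }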
00:26:31.675 [2024-12-08 06:13:54.597568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.597659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.675 [2024-12-08 06:13:54.597674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:31.675 [2024-12-08 06:13:54.597685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:31.675 [2024-12-08 06:13:54.597695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.602815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.602855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:31.675 [2024-12-08 06:13:54.602885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.602895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.602942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.602955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:31.675 [2024-12-08 06:13:54.602966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.602976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.603079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.603098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:31.675 [2024-12-08 06:13:54.603109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.603118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.603141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.603153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:31.675 [2024-12-08 06:13:54.603163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.603172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.611035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.611110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:31.675 [2024-12-08 06:13:54.611142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.611152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.617499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.617548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:31.675 [2024-12-08 06:13:54.617579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.617589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.617666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.617683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:31.675 [2024-12-08 06:13:54.617693] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.617703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.617769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.617785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:31.675 [2024-12-08 06:13:54.617795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.675 [2024-12-08 06:13:54.617804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.675 [2024-12-08 06:13:54.617894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.675 [2024-12-08 06:13:54.617916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:31.675 [2024-12-08 06:13:54.617935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.676 [2024-12-08 06:13:54.617944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.676 [2024-12-08 06:13:54.617988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.676 [2024-12-08 06:13:54.618004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:31.676 [2024-12-08 06:13:54.618014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.676 [2024-12-08 06:13:54.618023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.676 [2024-12-08 06:13:54.618067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.676 [2024-12-08 06:13:54.618087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:31.676 [2024-12-08 06:13:54.618097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.676 [2024-12-08 06:13:54.618106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.676 [2024-12-08 06:13:54.618170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:31.676 [2024-12-08 06:13:54.618185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:31.676 [2024-12-08 06:13:54.618195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:31.676 [2024-12-08 06:13:54.618205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.676 [2024-12-08 06:13:54.618404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 34.543 ms, result 0 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:31.935 Remove shared memory files 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:31.935 06:13:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92046 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:31.935 ************************************ 00:26:31.935 END TEST ftl_upgrade_shutdown 00:26:31.935 ************************************ 00:26:31.935 00:26:31.935 real 1m14.349s 00:26:31.935 user 1m44.478s 00:26:31.935 sys 0m21.748s 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:31.935 06:13:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:31.935 06:13:54 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:31.935 06:13:54 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:31.935 06:13:54 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:26:31.935 06:13:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:31.935 06:13:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:31.935 ************************************ 00:26:31.935 START TEST ftl_restore_fast 00:26:31.935 ************************************ 00:26:31.935 06:13:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:32.198 * Looking for test storage... 00:26:32.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:26:32.198 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.199 --rc genhtml_branch_coverage=1 00:26:32.199 --rc genhtml_function_coverage=1 00:26:32.199 --rc genhtml_legend=1 00:26:32.199 --rc geninfo_all_blocks=1 00:26:32.199 --rc geninfo_unexecuted_blocks=1 00:26:32.199 00:26:32.199 ' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.199 --rc genhtml_branch_coverage=1 00:26:32.199 --rc genhtml_function_coverage=1 00:26:32.199 --rc genhtml_legend=1 00:26:32.199 --rc geninfo_all_blocks=1 00:26:32.199 --rc geninfo_unexecuted_blocks=1 00:26:32.199 00:26:32.199 ' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.199 --rc genhtml_branch_coverage=1 00:26:32.199 --rc genhtml_function_coverage=1 00:26:32.199 --rc genhtml_legend=1 00:26:32.199 --rc geninfo_all_blocks=1 00:26:32.199 --rc geninfo_unexecuted_blocks=1 00:26:32.199 00:26:32.199 ' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.199 --rc genhtml_branch_coverage=1 00:26:32.199 --rc genhtml_function_coverage=1 00:26:32.199 --rc genhtml_legend=1 00:26:32.199 --rc geninfo_all_blocks=1 00:26:32.199 --rc geninfo_unexecuted_blocks=1 00:26:32.199 00:26:32.199 ' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
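The xtrace just above walks scripts/common.sh through lt 1.15 2, i.e. cmp_versions 1.15 '<' 2: both version strings are split on IFS=.-:, each component is validated by a decimal helper, and components are compared pairwise up to the longer length. A condensed bash sketch of that comparison, reconstructed from the trace alone (the real helper also tracks lt/gt/eq counters and runs each component through decimal, both omitted here):

    # Simplified reconstruction of the lt/cmp_versions pair stepped through above
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"     # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]                   # all components equal: succeed only for =, <=, >=
    }

So lt 1.15 2 compares 1 against 2 in the first position, sees it is smaller, and succeeds because the operator is '<', which is why the trace above ends with return 0.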
00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.YGHuGpI4Wx 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:32.199 06:13:55 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92461 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92461 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92461 ']' 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.199 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:32.199 [2024-12-08 06:13:55.220825] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:32.199 [2024-12-08 06:13:55.220971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92461 ] 00:26:32.458 [2024-12-08 06:13:55.362638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.458 [2024-12-08 06:13:55.398446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:32.716 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:26:32.974 06:13:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:33.232 06:13:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:33.232 { 00:26:33.232 "name": "nvme0n1", 00:26:33.232 "aliases": [ 00:26:33.232 "9914650f-e868-44d3-9684-9cfb11bec011" 00:26:33.232 ], 00:26:33.232 "product_name": "NVMe disk", 00:26:33.232 "block_size": 4096, 00:26:33.232 "num_blocks": 1310720, 00:26:33.232 "uuid": "9914650f-e868-44d3-9684-9cfb11bec011", 00:26:33.232 "numa_id": -1, 00:26:33.232 "assigned_rate_limits": { 00:26:33.232 "rw_ios_per_sec": 0, 00:26:33.232 "rw_mbytes_per_sec": 0, 00:26:33.232 "r_mbytes_per_sec": 0, 00:26:33.232 "w_mbytes_per_sec": 0 00:26:33.232 }, 00:26:33.232 "claimed": true, 00:26:33.232 "claim_type": "read_many_write_one", 00:26:33.232 "zoned": false, 00:26:33.232 "supported_io_types": { 00:26:33.232 "read": true, 00:26:33.232 "write": true, 00:26:33.232 "unmap": true, 00:26:33.232 "flush": true, 00:26:33.232 "reset": true, 00:26:33.232 "nvme_admin": true, 00:26:33.232 "nvme_io": true, 00:26:33.232 "nvme_io_md": false, 00:26:33.232 "write_zeroes": true, 00:26:33.232 "zcopy": false, 00:26:33.232 "get_zone_info": false, 00:26:33.232 "zone_management": false, 00:26:33.232 "zone_append": false, 00:26:33.232 "compare": true, 00:26:33.232 "compare_and_write": false, 00:26:33.232 "abort": true, 00:26:33.232 "seek_hole": false, 00:26:33.232 "seek_data": false, 00:26:33.232 "copy": true, 00:26:33.232 "nvme_iov_md": false 00:26:33.232 }, 00:26:33.232 "driver_specific": { 00:26:33.232 "nvme": [ 00:26:33.232 { 00:26:33.233 "pci_address": "0000:00:11.0", 00:26:33.233 "trid": { 00:26:33.233 "trtype": "PCIe", 00:26:33.233 "traddr": "0000:00:11.0" 00:26:33.233 }, 00:26:33.233 "ctrlr_data": { 00:26:33.233 "cntlid": 0, 00:26:33.233 "vendor_id": "0x1b36", 00:26:33.233 "model_number": "QEMU NVMe Ctrl", 00:26:33.233 "serial_number": "12341", 00:26:33.233 "firmware_revision": "8.0.0", 00:26:33.233 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:33.233 "oacs": { 00:26:33.233 "security": 0, 00:26:33.233 "format": 1, 00:26:33.233 "firmware": 0, 00:26:33.233 "ns_manage": 1 00:26:33.233 }, 00:26:33.233 "multi_ctrlr": false, 00:26:33.233 "ana_reporting": false 00:26:33.233 }, 00:26:33.233 "vs": { 00:26:33.233 "nvme_version": "1.4" 00:26:33.233 }, 00:26:33.233 "ns_data": { 00:26:33.233 "id": 1, 00:26:33.233 "can_share": false 00:26:33.233 } 00:26:33.233 } 00:26:33.233 ], 00:26:33.233 "mp_policy": "active_passive" 00:26:33.233 } 00:26:33.233 } 00:26:33.233 ]' 00:26:33.233 06:13:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:33.233 06:13:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:33.233 06:13:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:33.491 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:33.750 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=98d041ff-4591-4c80-b5fa-5b882b122531 00:26:33.750 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:33.750 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 98d041ff-4591-4c80-b5fa-5b882b122531 00:26:34.008 06:13:56 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:34.265 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=a7cb78e1-1bba-4634-824c-aa84ab9fe9a9 00:26:34.265 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a7cb78e1-1bba-4634-824c-aa84ab9fe9a9 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:34.523 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:34.782 { 00:26:34.782 "name": "7e11ff89-2273-4dfe-b014-0d37b3f368b7", 00:26:34.782 "aliases": [ 00:26:34.782 "lvs/nvme0n1p0" 00:26:34.782 ], 00:26:34.782 "product_name": "Logical Volume", 00:26:34.782 "block_size": 4096, 00:26:34.782 "num_blocks": 26476544, 00:26:34.782 "uuid": "7e11ff89-2273-4dfe-b014-0d37b3f368b7", 00:26:34.782 "assigned_rate_limits": { 00:26:34.782 "rw_ios_per_sec": 0, 00:26:34.782 "rw_mbytes_per_sec": 0, 00:26:34.782 "r_mbytes_per_sec": 0, 00:26:34.782 "w_mbytes_per_sec": 0 00:26:34.782 }, 00:26:34.782 "claimed": false, 00:26:34.782 "zoned": false, 00:26:34.782 "supported_io_types": { 00:26:34.782 "read": true, 00:26:34.782 "write": true, 00:26:34.782 "unmap": true, 00:26:34.782 "flush": false, 00:26:34.782 "reset": true, 00:26:34.782 "nvme_admin": false, 00:26:34.782 "nvme_io": false, 00:26:34.782 "nvme_io_md": false, 00:26:34.782 "write_zeroes": true, 00:26:34.782 "zcopy": false, 00:26:34.782 "get_zone_info": false, 00:26:34.782 "zone_management": false, 00:26:34.782 
"zone_append": false, 00:26:34.782 "compare": false, 00:26:34.782 "compare_and_write": false, 00:26:34.782 "abort": false, 00:26:34.782 "seek_hole": true, 00:26:34.782 "seek_data": true, 00:26:34.782 "copy": false, 00:26:34.782 "nvme_iov_md": false 00:26:34.782 }, 00:26:34.782 "driver_specific": { 00:26:34.782 "lvol": { 00:26:34.782 "lvol_store_uuid": "a7cb78e1-1bba-4634-824c-aa84ab9fe9a9", 00:26:34.782 "base_bdev": "nvme0n1", 00:26:34.782 "thin_provision": true, 00:26:34.782 "num_allocated_clusters": 0, 00:26:34.782 "snapshot": false, 00:26:34.782 "clone": false, 00:26:34.782 "esnap_clone": false 00:26:34.782 } 00:26:34.782 } 00:26:34.782 } 00:26:34.782 ]' 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:34.782 06:13:57 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:35.041 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:35.300 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:35.300 { 00:26:35.300 "name": "7e11ff89-2273-4dfe-b014-0d37b3f368b7", 00:26:35.300 "aliases": [ 00:26:35.300 "lvs/nvme0n1p0" 00:26:35.300 ], 00:26:35.300 "product_name": "Logical Volume", 00:26:35.300 "block_size": 4096, 00:26:35.300 "num_blocks": 26476544, 00:26:35.300 "uuid": "7e11ff89-2273-4dfe-b014-0d37b3f368b7", 00:26:35.300 "assigned_rate_limits": { 00:26:35.300 "rw_ios_per_sec": 0, 00:26:35.300 "rw_mbytes_per_sec": 0, 00:26:35.300 "r_mbytes_per_sec": 0, 00:26:35.300 "w_mbytes_per_sec": 0 00:26:35.300 }, 00:26:35.300 "claimed": false, 00:26:35.300 "zoned": false, 00:26:35.300 "supported_io_types": { 00:26:35.300 "read": true, 00:26:35.300 "write": true, 00:26:35.300 "unmap": true, 00:26:35.300 "flush": false, 00:26:35.300 "reset": true, 00:26:35.300 "nvme_admin": false, 00:26:35.300 "nvme_io": false, 00:26:35.300 "nvme_io_md": false, 00:26:35.300 "write_zeroes": true, 00:26:35.300 "zcopy": false, 00:26:35.300 "get_zone_info": false, 00:26:35.300 
"zone_management": false, 00:26:35.300 "zone_append": false, 00:26:35.300 "compare": false, 00:26:35.300 "compare_and_write": false, 00:26:35.300 "abort": false, 00:26:35.300 "seek_hole": true, 00:26:35.300 "seek_data": true, 00:26:35.300 "copy": false, 00:26:35.300 "nvme_iov_md": false 00:26:35.300 }, 00:26:35.300 "driver_specific": { 00:26:35.300 "lvol": { 00:26:35.300 "lvol_store_uuid": "a7cb78e1-1bba-4634-824c-aa84ab9fe9a9", 00:26:35.300 "base_bdev": "nvme0n1", 00:26:35.300 "thin_provision": true, 00:26:35.300 "num_allocated_clusters": 0, 00:26:35.300 "snapshot": false, 00:26:35.300 "clone": false, 00:26:35.300 "esnap_clone": false 00:26:35.300 } 00:26:35.300 } 00:26:35.300 } 00:26:35.300 ]' 00:26:35.300 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:35.558 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:35.558 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:35.558 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:35.558 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:35.558 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:35.558 06:13:58 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:35.558 06:13:58 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:35.816 06:13:58 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:35.816 06:13:58 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:35.816 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:35.816 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:35.816 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:35.816 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:35.816 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e11ff89-2273-4dfe-b014-0d37b3f368b7 00:26:36.073 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:36.073 { 00:26:36.073 "name": "7e11ff89-2273-4dfe-b014-0d37b3f368b7", 00:26:36.073 "aliases": [ 00:26:36.073 "lvs/nvme0n1p0" 00:26:36.073 ], 00:26:36.073 "product_name": "Logical Volume", 00:26:36.073 "block_size": 4096, 00:26:36.073 "num_blocks": 26476544, 00:26:36.073 "uuid": "7e11ff89-2273-4dfe-b014-0d37b3f368b7", 00:26:36.073 "assigned_rate_limits": { 00:26:36.073 "rw_ios_per_sec": 0, 00:26:36.073 "rw_mbytes_per_sec": 0, 00:26:36.073 "r_mbytes_per_sec": 0, 00:26:36.073 "w_mbytes_per_sec": 0 00:26:36.073 }, 00:26:36.073 "claimed": false, 00:26:36.073 "zoned": false, 00:26:36.073 "supported_io_types": { 00:26:36.073 "read": true, 00:26:36.073 "write": true, 00:26:36.073 "unmap": true, 00:26:36.073 "flush": false, 00:26:36.073 "reset": true, 00:26:36.073 "nvme_admin": false, 00:26:36.073 "nvme_io": false, 00:26:36.073 "nvme_io_md": false, 00:26:36.073 "write_zeroes": true, 00:26:36.073 "zcopy": false, 00:26:36.073 "get_zone_info": false, 00:26:36.073 "zone_management": false, 00:26:36.073 "zone_append": false, 00:26:36.073 "compare": false, 00:26:36.073 "compare_and_write": false, 00:26:36.073 "abort": false, 
00:26:36.073 "seek_hole": true, 00:26:36.073 "seek_data": true, 00:26:36.073 "copy": false, 00:26:36.073 "nvme_iov_md": false 00:26:36.073 }, 00:26:36.073 "driver_specific": { 00:26:36.073 "lvol": { 00:26:36.073 "lvol_store_uuid": "a7cb78e1-1bba-4634-824c-aa84ab9fe9a9", 00:26:36.073 "base_bdev": "nvme0n1", 00:26:36.073 "thin_provision": true, 00:26:36.073 "num_allocated_clusters": 0, 00:26:36.073 "snapshot": false, 00:26:36.073 "clone": false, 00:26:36.073 "esnap_clone": false 00:26:36.073 } 00:26:36.073 } 00:26:36.073 } 00:26:36.073 ]' 00:26:36.073 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:36.073 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:36.073 06:13:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7e11ff89-2273-4dfe-b014-0d37b3f368b7 --l2p_dram_limit 10' 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:36.073 06:13:59 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7e11ff89-2273-4dfe-b014-0d37b3f368b7 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:36.331 [2024-12-08 06:13:59.265550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.265815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:36.331 [2024-12-08 06:13:59.265848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:36.331 [2024-12-08 06:13:59.265864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.265958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.265982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:36.331 [2024-12-08 06:13:59.265995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:36.331 [2024-12-08 06:13:59.266012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.266045] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:36.331 [2024-12-08 06:13:59.266458] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:36.331 [2024-12-08 06:13:59.266491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.266507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:36.331 [2024-12-08 06:13:59.266529] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:26:36.331 [2024-12-08 06:13:59.266543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.266756] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9b4d07a5-2847-4a96-b9c4-06f714408bf5 00:26:36.331 [2024-12-08 06:13:59.267839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.267870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:36.331 [2024-12-08 06:13:59.267886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:36.331 [2024-12-08 06:13:59.267898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.272454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.272700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:36.331 [2024-12-08 06:13:59.272736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.490 ms 00:26:36.331 [2024-12-08 06:13:59.272750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.272862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.272880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:36.331 [2024-12-08 06:13:59.272903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:36.331 [2024-12-08 06:13:59.272918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.273025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.273044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:36.331 [2024-12-08 06:13:59.273064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:36.331 [2024-12-08 06:13:59.273076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.273111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:36.331 [2024-12-08 06:13:59.274703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.274741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:36.331 [2024-12-08 06:13:59.274776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:26:36.331 [2024-12-08 06:13:59.274789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.274844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.274861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:36.331 [2024-12-08 06:13:59.274873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:36.331 [2024-12-08 06:13:59.274888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.274937] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:36.331 [2024-12-08 06:13:59.275089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:36.331 [2024-12-08 06:13:59.275107] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:36.331 [2024-12-08 06:13:59.275124] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:36.331 [2024-12-08 06:13:59.275138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275153] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275165] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:36.331 [2024-12-08 06:13:59.275183] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:36.331 [2024-12-08 06:13:59.275195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:36.331 [2024-12-08 06:13:59.275206] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:36.331 [2024-12-08 06:13:59.275255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.275269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:36.331 [2024-12-08 06:13:59.275281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:26:36.331 [2024-12-08 06:13:59.275295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.275387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.275416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:36.331 [2024-12-08 06:13:59.275439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:36.331 [2024-12-08 06:13:59.275479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.275594] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:36.331 [2024-12-08 06:13:59.275619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:36.331 [2024-12-08 06:13:59.275631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:36.331 [2024-12-08 06:13:59.275690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:36.331 [2024-12-08 06:13:59.275725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:36.331 [2024-12-08 06:13:59.275748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:36.331 [2024-12-08 06:13:59.275762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:36.331 [2024-12-08 06:13:59.275788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:36.331 [2024-12-08 06:13:59.275802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:36.331 [2024-12-08 06:13:59.275812] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:36.331 [2024-12-08 06:13:59.275839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:36.331 [2024-12-08 06:13:59.275860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:36.331 [2024-12-08 06:13:59.275896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:36.331 [2024-12-08 06:13:59.275929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:36.331 [2024-12-08 06:13:59.275960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:36.331 [2024-12-08 06:13:59.275971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.331 [2024-12-08 06:13:59.275981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:36.331 [2024-12-08 06:13:59.275995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:36.331 [2024-12-08 06:13:59.276005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.331 [2024-12-08 06:13:59.276017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:36.331 [2024-12-08 06:13:59.276026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:36.331 [2024-12-08 06:13:59.276038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:36.331 [2024-12-08 06:13:59.276047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:36.331 [2024-12-08 06:13:59.276059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:36.331 [2024-12-08 06:13:59.276068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:36.331 [2024-12-08 06:13:59.276080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:36.331 [2024-12-08 06:13:59.276089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:36.331 [2024-12-08 06:13:59.276101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.331 [2024-12-08 06:13:59.276110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:36.331 [2024-12-08 06:13:59.276122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:36.331 [2024-12-08 06:13:59.276131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.331 [2024-12-08 06:13:59.276142] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:36.331 [2024-12-08 06:13:59.276163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:36.331 [2024-12-08 06:13:59.276178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:26:36.331 [2024-12-08 06:13:59.276188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.331 [2024-12-08 06:13:59.276201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:36.331 [2024-12-08 06:13:59.276211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:36.331 [2024-12-08 06:13:59.276222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:36.331 [2024-12-08 06:13:59.276625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:36.331 [2024-12-08 06:13:59.276677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:36.331 [2024-12-08 06:13:59.276720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:36.331 [2024-12-08 06:13:59.276849] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:36.331 [2024-12-08 06:13:59.276922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.331 [2024-12-08 06:13:59.277056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:36.331 [2024-12-08 06:13:59.277125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:36.331 [2024-12-08 06:13:59.277267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:36.331 [2024-12-08 06:13:59.277337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:36.331 [2024-12-08 06:13:59.277502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:36.331 [2024-12-08 06:13:59.277561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:36.331 [2024-12-08 06:13:59.277688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:36.331 [2024-12-08 06:13:59.277868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:36.331 [2024-12-08 06:13:59.277935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:36.331 [2024-12-08 06:13:59.278103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:36.331 [2024-12-08 06:13:59.278243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:36.331 [2024-12-08 06:13:59.278312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:36.331 [2024-12-08 06:13:59.278421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:36.331 [2024-12-08 06:13:59.278440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
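The Region entries in this superblock dump appear to give blk_offs/blk_sz in units of the 4096-byte block size reported for these bdevs; a quick arithmetic cross-check against the named layout printed just above (hex values taken verbatim from the dump, 4 KiB blocks assumed):

    # type:0x2 blk_offs:0x20 blk_sz:0x5000 lines up with the l2p region:
    echo $(( 0x20 * 4096 / 1024 ))           # 128 KiB -> "Region l2p ... offset: 0.12 MiB"
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))  # 80      -> "blocks: 80.00 MiB"
    # which also matches "L2P entries: 20971520" at "L2P address size: 4" bytes each:
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 MiB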
00:26:36.331 [2024-12-08 06:13:59.278454] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:36.331 [2024-12-08 06:13:59.278468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.331 [2024-12-08 06:13:59.278485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:36.331 [2024-12-08 06:13:59.278497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:36.331 [2024-12-08 06:13:59.278511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:36.331 [2024-12-08 06:13:59.278522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:36.331 [2024-12-08 06:13:59.278538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.331 [2024-12-08 06:13:59.278550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:36.331 [2024-12-08 06:13:59.278570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:26:36.331 [2024-12-08 06:13:59.278581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.331 [2024-12-08 06:13:59.278671] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:36.331 [2024-12-08 06:13:59.278706] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:38.874 [2024-12-08 06:14:01.395834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.874 [2024-12-08 06:14:01.396119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:38.874 [2024-12-08 06:14:01.396305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2117.169 ms 00:26:38.875 [2024-12-08 06:14:01.396428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.404184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.404522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.875 [2024-12-08 06:14:01.404696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.587 ms 00:26:38.875 [2024-12-08 06:14:01.404748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.405021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.405164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:38.875 [2024-12-08 06:14:01.405313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:38.875 [2024-12-08 06:14:01.405369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.413646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.413894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.875 [2024-12-08 06:14:01.414036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.094 ms 00:26:38.875 [2024-12-08 06:14:01.414214] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.414324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.414407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.875 [2024-12-08 06:14:01.414579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:38.875 [2024-12-08 06:14:01.414630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.415151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.415313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.875 [2024-12-08 06:14:01.415448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:26:38.875 [2024-12-08 06:14:01.415570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.415780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.415840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.875 [2024-12-08 06:14:01.416050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:26:38.875 [2024-12-08 06:14:01.416124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.429697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.430011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.875 [2024-12-08 06:14:01.430159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.367 ms 00:26:38.875 [2024-12-08 06:14:01.430183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.439098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.875 [2024-12-08 06:14:01.442064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.442414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.875 [2024-12-08 06:14:01.442537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.659 ms 00:26:38.875 [2024-12-08 06:14:01.442647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.493242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.493617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:38.875 [2024-12-08 06:14:01.493763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.409 ms 00:26:38.875 [2024-12-08 06:14:01.493916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.494185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.494383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.875 [2024-12-08 06:14:01.494517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:26:38.875 [2024-12-08 06:14:01.494585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.498245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.498450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:26:38.875 [2024-12-08 06:14:01.498578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 00:26:38.875 [2024-12-08 06:14:01.498695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.501737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.501922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:38.875 [2024-12-08 06:14:01.501948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:26:38.875 [2024-12-08 06:14:01.501962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.502405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.502432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:38.875 [2024-12-08 06:14:01.502445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:26:38.875 [2024-12-08 06:14:01.502475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.527963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.528053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:38.875 [2024-12-08 06:14:01.528088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.459 ms 00:26:38.875 [2024-12-08 06:14:01.528101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.532830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.532923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:38.875 [2024-12-08 06:14:01.532942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:26:38.875 [2024-12-08 06:14:01.532954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.537147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.537422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:38.875 [2024-12-08 06:14:01.537453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:26:38.875 [2024-12-08 06:14:01.537469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.541820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.541881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.875 [2024-12-08 06:14:01.541899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:26:38.875 [2024-12-08 06:14:01.541915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.541979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.541998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.875 [2024-12-08 06:14:01.542010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:38.875 [2024-12-08 06:14:01.542022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.542097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.875 [2024-12-08 06:14:01.542114] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.875 [2024-12-08 06:14:01.542141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:38.875 [2024-12-08 06:14:01.542169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.875 [2024-12-08 06:14:01.543415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2277.268 ms, result 0 00:26:38.875 { 00:26:38.875 "name": "ftl0", 00:26:38.875 "uuid": "9b4d07a5-2847-4a96-b9c4-06f714408bf5" 00:26:38.875 } 00:26:38.875 06:14:01 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:38.875 06:14:01 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:38.875 06:14:01 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:38.875 06:14:01 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:39.444 [2024-12-08 06:14:02.184667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.184936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:39.444 [2024-12-08 06:14:02.184972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:39.444 [2024-12-08 06:14:02.184987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.185039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:39.444 [2024-12-08 06:14:02.185574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.185637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:39.444 [2024-12-08 06:14:02.185651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:26:39.444 [2024-12-08 06:14:02.185667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.185961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.185998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:39.444 [2024-12-08 06:14:02.186021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:26:39.444 [2024-12-08 06:14:02.186063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.189648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.189682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:39.444 [2024-12-08 06:14:02.189710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.563 ms 00:26:39.444 [2024-12-08 06:14:02.189722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.196649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.196696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:39.444 [2024-12-08 06:14:02.196709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:26:39.444 [2024-12-08 06:14:02.196721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.198156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
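Before issuing bdev_ftl_unload above, restore.sh@61-63 wrapped save_subsystem_config -n bdev in a JSON envelope, presumably so a later target can be started from the saved state and the FTL device restored. A sketch of that pattern using this run's paths (/tmp/ftl.json is a hypothetical output file added for the example):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/tmp/ftl.json                          # hypothetical path for this sketch
    {
        echo '{"subsystems": ['
        "$rpc" save_subsystem_config -n bdev   # dump only the bdev subsystem config
        echo ']}'
    } > "$cfg"
    # a later target could then be started from the saved state, e.g.:
    # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --json "$cfg"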
00:26:39.444 [2024-12-08 06:14:02.198219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:39.444 [2024-12-08 06:14:02.198236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:26:39.444 [2024-12-08 06:14:02.198249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.202441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.202496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:39.444 [2024-12-08 06:14:02.202527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.103 ms 00:26:39.444 [2024-12-08 06:14:02.202555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.202730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.202752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:39.444 [2024-12-08 06:14:02.202765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:26:39.444 [2024-12-08 06:14:02.202776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.204397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.204441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:39.444 [2024-12-08 06:14:02.204456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:26:39.444 [2024-12-08 06:14:02.204470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.205999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.206066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:39.444 [2024-12-08 06:14:02.206080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:26:39.444 [2024-12-08 06:14:02.206091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.207269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.207310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:39.444 [2024-12-08 06:14:02.207325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:26:39.444 [2024-12-08 06:14:02.207338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.208604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.444 [2024-12-08 06:14:02.208671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:39.444 [2024-12-08 06:14:02.208695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:26:39.444 [2024-12-08 06:14:02.208709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.444 [2024-12-08 06:14:02.208809] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:39.444 [2024-12-08 06:14:02.208836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208862] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:39.444 [2024-12-08 06:14:02.208965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.208977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.208987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.208999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209181] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 
06:14:02.209572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:26:39.445 [2024-12-08 06:14:02.209854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.209989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:39.445 [2024-12-08 06:14:02.210139] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:39.446 [2024-12-08 06:14:02.210166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9b4d07a5-2847-4a96-b9c4-06f714408bf5 00:26:39.446 
[2024-12-08 06:14:02.210182] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:39.446 [2024-12-08 06:14:02.210193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:39.446 [2024-12-08 06:14:02.210206] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:39.446 [2024-12-08 06:14:02.210217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:39.446 [2024-12-08 06:14:02.210230] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:39.446 [2024-12-08 06:14:02.210241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:39.446 [2024-12-08 06:14:02.210264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:39.446 [2024-12-08 06:14:02.210276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:39.446 [2024-12-08 06:14:02.210288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:39.446 [2024-12-08 06:14:02.210300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.446 [2024-12-08 06:14:02.210313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:39.446 [2024-12-08 06:14:02.210328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:26:39.446 [2024-12-08 06:14:02.210341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.211933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.446 [2024-12-08 06:14:02.211973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:39.446 [2024-12-08 06:14:02.211988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:26:39.446 [2024-12-08 06:14:02.212001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.212094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.446 [2024-12-08 06:14:02.212115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:39.446 [2024-12-08 06:14:02.212142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:39.446 [2024-12-08 06:14:02.212155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.217656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.217711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:39.446 [2024-12-08 06:14:02.217734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.217747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.217807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.217826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:39.446 [2024-12-08 06:14:02.217837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.217849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.217956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.217982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:39.446 [2024-12-08 06:14:02.217994] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.218005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.218028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.218043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:39.446 [2024-12-08 06:14:02.218056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.218068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.226855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.226948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:39.446 [2024-12-08 06:14:02.226964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.226976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.234740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.234828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:39.446 [2024-12-08 06:14:02.234844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.234859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.234937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.234959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.446 [2024-12-08 06:14:02.234970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.234982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.235063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.235082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.446 [2024-12-08 06:14:02.235094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.235109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.235236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.235275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.446 [2024-12-08 06:14:02.235288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.235312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.235365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.235387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:39.446 [2024-12-08 06:14:02.235399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.235412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.235474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.235497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:26:39.446 [2024-12-08 06:14:02.235510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.235522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.235577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.446 [2024-12-08 06:14:02.235598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.446 [2024-12-08 06:14:02.235611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.446 [2024-12-08 06:14:02.235627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.446 [2024-12-08 06:14:02.235802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.094 ms, result 0 00:26:39.446 true 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92461 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92461 ']' 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92461 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92461 00:26:39.446 killing process with pid 92461 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92461' 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92461 00:26:39.446 06:14:02 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92461 00:26:42.746 06:14:05 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:46.933 262144+0 records in 00:26:46.933 262144+0 records out 00:26:46.933 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.21289 s, 255 MB/s 00:26:46.933 06:14:09 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:48.309 06:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:48.568 [2024-12-08 06:14:11.433021] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
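
For readers following the restore_fast flow: the shell steps captured just above (restore.sh lines 69-73 in this log) generate 1 GiB of random data, checksum it, then push it through the FTL bdev with spdk_dd. Below is a minimal bash sketch of that sequence; it paraphrases what the log shows rather than reproducing the authoritative restore.sh, with the paths, the ftl0 bdev name, and the spdk_dd flags (--if/--ob/--json) copied from the log itself.

#!/usr/bin/env bash
# Sketch of the data-preparation steps as they appear in the capture above.
set -euo pipefail

testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

# 256K blocks x 4 KiB = 1,073,741,824 bytes (1 GiB). dd reported 4.21289 s,
# i.e. 1,073,741,824 / 4.21289 ~= 255 MB/s (decimal), matching the log line.
dd if=/dev/urandom of="$testfile" bs=4K count=256K

# Record the checksum so the data can be verified after the FTL restore.
md5sum "$testfile"

# Write the file into the FTL bdev 'ftl0' (flags exactly as logged).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if="$testfile" --ob=ftl0 --json="$ftl_json"

The "Copying: N/1024 [MB]" ticker further down is consistent with this: roughly 1024 MiB moved between the "FTL startup" finish at 06:14:11.97 and the last copy stamp at 06:14:54.83, about 43 s, which works out to the reported average of ~23 MBps through the FTL write path.
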
00:26:48.568 [2024-12-08 06:14:11.433236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92662 ] 00:26:48.568 [2024-12-08 06:14:11.585927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.828 [2024-12-08 06:14:11.629416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.828 [2024-12-08 06:14:11.720818] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:48.828 [2024-12-08 06:14:11.720917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:49.091 [2024-12-08 06:14:11.878984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.879052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:49.091 [2024-12-08 06:14:11.879084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:49.091 [2024-12-08 06:14:11.879101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.879235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.879270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:49.091 [2024-12-08 06:14:11.879294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:49.091 [2024-12-08 06:14:11.879370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.879466] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:49.091 [2024-12-08 06:14:11.879880] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:49.091 [2024-12-08 06:14:11.879944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.879979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:49.091 [2024-12-08 06:14:11.880005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:26:49.091 [2024-12-08 06:14:11.880024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.881263] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:49.091 [2024-12-08 06:14:11.883388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.883455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:49.091 [2024-12-08 06:14:11.883483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:26:49.091 [2024-12-08 06:14:11.883504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.883601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.883649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:49.091 [2024-12-08 06:14:11.883674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:49.091 [2024-12-08 06:14:11.883700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.887910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
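
The trace_step records in this log (both the shutdown sequence above and the startup sequence that follows) always arrive as name/duration pairs, which makes them easy to fold into a per-step timing table. A small awk sketch, not part of the test suite: it assumes one log entry per line, as the console originally emitted them (this transcript wraps several entries per physical line), and the input file name build.log is a placeholder.

# Print the slowest FTL management steps from a captured console log.
# Relies on the exact trace_step wording seen in this log:
#   "... name: <step>" followed by "... duration: <ms> ms".
awk '
  /trace_step.*name: /     { sub(/.*name: /, "");     step = $0 }
  /trace_step.*duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                             printf "%10.3f ms  %s\n", $0, step }
' build.log | sort -rn | head

Run against this capture, it would rank "Restore P2L checkpoints" (14.965 ms), "Initialize L2P" (14.447 ms), "Initialize metadata" (13.867 ms) and "Restore valid map metadata" (13.684 ms) as the dominant costs inside the 98.993 ms "FTL startup" total reported further down.
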
00:26:49.091 [2024-12-08 06:14:11.887971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:49.091 [2024-12-08 06:14:11.887993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.049 ms 00:26:49.091 [2024-12-08 06:14:11.888011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.888146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.888173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:49.091 [2024-12-08 06:14:11.888244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:26:49.091 [2024-12-08 06:14:11.888263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.888355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.888400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:49.091 [2024-12-08 06:14:11.888423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:49.091 [2024-12-08 06:14:11.888453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.888506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:49.091 [2024-12-08 06:14:11.889884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.889936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:49.091 [2024-12-08 06:14:11.889960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:26:49.091 [2024-12-08 06:14:11.889980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.890064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.091 [2024-12-08 06:14:11.890087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:49.091 [2024-12-08 06:14:11.890107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:49.091 [2024-12-08 06:14:11.890124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.091 [2024-12-08 06:14:11.890179] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:49.091 [2024-12-08 06:14:11.890241] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:49.092 [2024-12-08 06:14:11.890314] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:49.092 [2024-12-08 06:14:11.890352] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:49.092 [2024-12-08 06:14:11.890488] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:49.092 [2024-12-08 06:14:11.890515] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:49.092 [2024-12-08 06:14:11.890538] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:49.092 [2024-12-08 06:14:11.890562] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:49.092 [2024-12-08 06:14:11.890601] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:49.092 [2024-12-08 06:14:11.890631] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:49.092 [2024-12-08 06:14:11.890661] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:49.092 [2024-12-08 06:14:11.890678] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:49.092 [2024-12-08 06:14:11.890696] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:49.092 [2024-12-08 06:14:11.890716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.092 [2024-12-08 06:14:11.890733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:49.092 [2024-12-08 06:14:11.890753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:26:49.092 [2024-12-08 06:14:11.890771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.092 [2024-12-08 06:14:11.890892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.092 [2024-12-08 06:14:11.890937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:49.092 [2024-12-08 06:14:11.890963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:49.092 [2024-12-08 06:14:11.890982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.092 [2024-12-08 06:14:11.891147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:49.092 [2024-12-08 06:14:11.891208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:49.092 [2024-12-08 06:14:11.891234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:49.092 [2024-12-08 06:14:11.891268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:49.092 [2024-12-08 06:14:11.891307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:49.092 [2024-12-08 06:14:11.891343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:49.092 [2024-12-08 06:14:11.891364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:49.092 [2024-12-08 06:14:11.891401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:49.092 [2024-12-08 06:14:11.891418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:49.092 [2024-12-08 06:14:11.891465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:49.092 [2024-12-08 06:14:11.891493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:49.092 [2024-12-08 06:14:11.891514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:49.092 [2024-12-08 06:14:11.891534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:49.092 [2024-12-08 06:14:11.891570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:49.092 [2024-12-08 06:14:11.891590] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:49.092 [2024-12-08 06:14:11.891628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:49.092 [2024-12-08 06:14:11.891666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:49.092 [2024-12-08 06:14:11.891683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:49.092 [2024-12-08 06:14:11.891736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:49.092 [2024-12-08 06:14:11.891768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:49.092 [2024-12-08 06:14:11.891803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:49.092 [2024-12-08 06:14:11.891830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:49.092 [2024-12-08 06:14:11.891869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:49.092 [2024-12-08 06:14:11.891887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:49.092 [2024-12-08 06:14:11.891903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:49.092 [2024-12-08 06:14:11.891921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:49.092 [2024-12-08 06:14:11.891940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:49.092 [2024-12-08 06:14:11.891957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:49.092 [2024-12-08 06:14:11.891973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:49.092 [2024-12-08 06:14:11.891992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:49.092 [2024-12-08 06:14:11.892009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:49.092 [2024-12-08 06:14:11.892027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:49.092 [2024-12-08 06:14:11.892044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:49.092 [2024-12-08 06:14:11.892062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:49.092 [2024-12-08 06:14:11.892080] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:49.092 [2024-12-08 06:14:11.892097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:49.092 [2024-12-08 06:14:11.892120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:49.092 [2024-12-08 06:14:11.892144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:49.092 [2024-12-08 06:14:11.892162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:49.092 [2024-12-08 06:14:11.892181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:49.092 [2024-12-08 06:14:11.892198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:49.092 
[2024-12-08 06:14:11.892214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:49.092 [2024-12-08 06:14:11.892247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:49.092 [2024-12-08 06:14:11.892267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:49.092 [2024-12-08 06:14:11.892286] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:49.092 [2024-12-08 06:14:11.892320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:49.092 [2024-12-08 06:14:11.892340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:49.092 [2024-12-08 06:14:11.892358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:49.092 [2024-12-08 06:14:11.892378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:49.092 [2024-12-08 06:14:11.892398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:49.092 [2024-12-08 06:14:11.892416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:49.092 [2024-12-08 06:14:11.892434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:49.092 [2024-12-08 06:14:11.892458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:49.092 [2024-12-08 06:14:11.892484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:49.092 [2024-12-08 06:14:11.892503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:49.092 [2024-12-08 06:14:11.892535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:49.092 [2024-12-08 06:14:11.892554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:49.092 [2024-12-08 06:14:11.892574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:49.092 [2024-12-08 06:14:11.892593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:49.092 [2024-12-08 06:14:11.892612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:49.092 [2024-12-08 06:14:11.892631] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:49.092 [2024-12-08 06:14:11.892651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:49.092 [2024-12-08 06:14:11.892670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:49.092 [2024-12-08 06:14:11.892690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:49.092 [2024-12-08 06:14:11.892709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:49.092 [2024-12-08 06:14:11.892727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:49.092 [2024-12-08 06:14:11.892749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.092 [2024-12-08 06:14:11.892768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:49.092 [2024-12-08 06:14:11.892792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:26:49.093 [2024-12-08 06:14:11.892812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.906788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.906855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:49.093 [2024-12-08 06:14:11.906880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.867 ms 00:26:49.093 [2024-12-08 06:14:11.906911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.907035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.907061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:49.093 [2024-12-08 06:14:11.907109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:49.093 [2024-12-08 06:14:11.907158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.914688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.914745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:49.093 [2024-12-08 06:14:11.914781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.374 ms 00:26:49.093 [2024-12-08 06:14:11.914800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.914853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.914876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:49.093 [2024-12-08 06:14:11.914895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:49.093 [2024-12-08 06:14:11.914913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.915416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.915489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:49.093 [2024-12-08 06:14:11.915517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:26:49.093 [2024-12-08 06:14:11.915539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.915781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.915817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:49.093 [2024-12-08 06:14:11.915855] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:26:49.093 [2024-12-08 06:14:11.915876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.920774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.920838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:49.093 [2024-12-08 06:14:11.920871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.826 ms 00:26:49.093 [2024-12-08 06:14:11.920891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.923668] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:49.093 [2024-12-08 06:14:11.923716] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:49.093 [2024-12-08 06:14:11.923745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.923804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:49.093 [2024-12-08 06:14:11.923840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:26:49.093 [2024-12-08 06:14:11.923872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.937700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.937755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:49.093 [2024-12-08 06:14:11.937792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.684 ms 00:26:49.093 [2024-12-08 06:14:11.937821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.939610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.939652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:49.093 [2024-12-08 06:14:11.939676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:26:49.093 [2024-12-08 06:14:11.939695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.941425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.941477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:49.093 [2024-12-08 06:14:11.941499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:26:49.093 [2024-12-08 06:14:11.941516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.941962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.942008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:49.093 [2024-12-08 06:14:11.942033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:26:49.093 [2024-12-08 06:14:11.942064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.957106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.957235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:49.093 [2024-12-08 06:14:11.957275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.965 ms 00:26:49.093 [2024-12-08 06:14:11.957299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.968527] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:49.093 [2024-12-08 06:14:11.971821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.971917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:49.093 [2024-12-08 06:14:11.971947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.447 ms 00:26:49.093 [2024-12-08 06:14:11.971970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.972096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.972126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:49.093 [2024-12-08 06:14:11.972170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:49.093 [2024-12-08 06:14:11.972191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.972427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.972471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:49.093 [2024-12-08 06:14:11.972497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:49.093 [2024-12-08 06:14:11.972530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.972595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.972632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:49.093 [2024-12-08 06:14:11.972655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:49.093 [2024-12-08 06:14:11.972674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.972740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:49.093 [2024-12-08 06:14:11.972768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.972787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:49.093 [2024-12-08 06:14:11.972819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:49.093 [2024-12-08 06:14:11.972845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.976763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.976830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:49.093 [2024-12-08 06:14:11.976859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.871 ms 00:26:49.093 [2024-12-08 06:14:11.976883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 [2024-12-08 06:14:11.977000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.093 [2024-12-08 06:14:11.977030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:49.093 [2024-12-08 06:14:11.977060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:49.093 [2024-12-08 06:14:11.977080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.093 
[2024-12-08 06:14:11.978731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.993 ms, result 0 00:26:50.032  [2024-12-08T06:14:14.016Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-08T06:14:15.395Z] Copying: 48/1024 [MB] (23 MBps) [2024-12-08T06:14:16.331Z] Copying: 71/1024 [MB] (23 MBps) [2024-12-08T06:14:17.263Z] Copying: 94/1024 [MB] (23 MBps) [2024-12-08T06:14:18.197Z] Copying: 119/1024 [MB] (24 MBps) [2024-12-08T06:14:19.131Z] Copying: 143/1024 [MB] (23 MBps) [2024-12-08T06:14:20.063Z] Copying: 167/1024 [MB] (23 MBps) [2024-12-08T06:14:21.006Z] Copying: 192/1024 [MB] (25 MBps) [2024-12-08T06:14:22.402Z] Copying: 216/1024 [MB] (24 MBps) [2024-12-08T06:14:23.340Z] Copying: 240/1024 [MB] (24 MBps) [2024-12-08T06:14:24.279Z] Copying: 264/1024 [MB] (23 MBps) [2024-12-08T06:14:25.224Z] Copying: 289/1024 [MB] (24 MBps) [2024-12-08T06:14:26.159Z] Copying: 313/1024 [MB] (24 MBps) [2024-12-08T06:14:27.092Z] Copying: 337/1024 [MB] (24 MBps) [2024-12-08T06:14:28.026Z] Copying: 360/1024 [MB] (23 MBps) [2024-12-08T06:14:29.406Z] Copying: 384/1024 [MB] (23 MBps) [2024-12-08T06:14:30.343Z] Copying: 407/1024 [MB] (23 MBps) [2024-12-08T06:14:31.278Z] Copying: 431/1024 [MB] (23 MBps) [2024-12-08T06:14:32.213Z] Copying: 455/1024 [MB] (23 MBps) [2024-12-08T06:14:33.150Z] Copying: 479/1024 [MB] (24 MBps) [2024-12-08T06:14:34.090Z] Copying: 503/1024 [MB] (23 MBps) [2024-12-08T06:14:35.060Z] Copying: 527/1024 [MB] (23 MBps) [2024-12-08T06:14:36.016Z] Copying: 550/1024 [MB] (23 MBps) [2024-12-08T06:14:37.390Z] Copying: 575/1024 [MB] (24 MBps) [2024-12-08T06:14:38.325Z] Copying: 599/1024 [MB] (23 MBps) [2024-12-08T06:14:39.260Z] Copying: 623/1024 [MB] (24 MBps) [2024-12-08T06:14:40.212Z] Copying: 647/1024 [MB] (23 MBps) [2024-12-08T06:14:41.149Z] Copying: 671/1024 [MB] (24 MBps) [2024-12-08T06:14:42.086Z] Copying: 695/1024 [MB] (23 MBps) [2024-12-08T06:14:43.022Z] Copying: 719/1024 [MB] (24 MBps) [2024-12-08T06:14:44.399Z] Copying: 743/1024 [MB] (23 MBps) [2024-12-08T06:14:45.335Z] Copying: 767/1024 [MB] (24 MBps) [2024-12-08T06:14:46.267Z] Copying: 792/1024 [MB] (24 MBps) [2024-12-08T06:14:47.199Z] Copying: 815/1024 [MB] (23 MBps) [2024-12-08T06:14:48.134Z] Copying: 840/1024 [MB] (24 MBps) [2024-12-08T06:14:49.070Z] Copying: 864/1024 [MB] (24 MBps) [2024-12-08T06:14:50.072Z] Copying: 888/1024 [MB] (23 MBps) [2024-12-08T06:14:51.010Z] Copying: 912/1024 [MB] (24 MBps) [2024-12-08T06:14:52.388Z] Copying: 936/1024 [MB] (24 MBps) [2024-12-08T06:14:53.325Z] Copying: 960/1024 [MB] (23 MBps) [2024-12-08T06:14:54.262Z] Copying: 983/1024 [MB] (23 MBps) [2024-12-08T06:14:54.831Z] Copying: 1007/1024 [MB] (23 MBps) [2024-12-08T06:14:54.831Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-08 06:14:54.688896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.786 [2024-12-08 06:14:54.688952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:31.786 [2024-12-08 06:14:54.689001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:31.786 [2024-12-08 06:14:54.689012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.786 [2024-12-08 06:14:54.689039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:31.786 [2024-12-08 06:14:54.689503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.786 [2024-12-08 06:14:54.689528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Unregister IO device, duration: 0.444 ms, status: 0
00:27:31.786 [2024-12-08 06:14:54.691406] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller, duration: 1.713 ms, status: 0
00:27:31.786 [2024-12-08 06:14:54.691514] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata, duration: 0.004 ms, status: 0
00:27:31.786 [2024-12-08 06:14:54.691617] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state, duration: 0.016 ms, status: 0
00:27:31.786 [2024-12-08 06:14:54.691636] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:31.786 [2024-12-08 06:14:54.691653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free [100 identical entries condensed]
00:27:31.788 [2024-12-08 06:14:54.692835] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:31.788 [2024-12-08 06:14:54.692852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9b4d07a5-2847-4a96-b9c4-06f714408bf5
00:27:31.788 [2024-12-08 06:14:54.692874] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:31.788 [2024-12-08 06:14:54.692885] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:27:31.788 [2024-12-08 06:14:54.692902] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:31.788 [2024-12-08 06:14:54.692912] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:31.788 [2024-12-08 06:14:54.692922] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:27:31.788 [2024-12-08 06:14:54.693014] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics, duration: 1.337 ms, status: 0
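A note on the statistics dump above: WAF (write amplification factor) is total media writes divided by user writes, so a run with 32 internal writes and 0 user writes prints "inf". A minimal sketch of that computation (the waf helper is hypothetical, illustrative only, not SPDK's code):

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical illustration of the WAF line in ftl_dev_dump_stats:
     * WAF = total writes / user writes; zero user writes yields infinity. */
    static double waf(double total_writes, double user_writes)
    {
        return user_writes > 0 ? total_writes / user_writes : INFINITY;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(32, 0));   /* prints "WAF: inf" */
        return 0;
    }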
00:27:31.788 [2024-12-08 06:14:54.694314] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P, duration: 1.180 ms, status: 0
00:27:31.788 [2024-12-08 06:14:54.694432] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing, duration: 0.057 ms, status: 0
00:27:31.788 [2024-12-08 06:14:54.698490] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each duration: 0.000 ms, status: 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev [12 condensed steps]
00:27:31.788 [2024-12-08 06:14:54.713315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 24.363 ms, result 0
00:27:32.356
00:27:32.356
00:27:32.613 06:14:55 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:27:32.613 [2024-12-08 06:14:55.494884] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
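For scale, --count=262144 logical blocks at the FTL bdev's 4 KiB block size (an assumption, but consistent with the copy totals reported further down) is exactly 1024 MiB. A quick check:

    #include <stdio.h>

    int main(void)
    {
        /* 262144 logical blocks * 4096 B/block (assumed FTL block size) */
        unsigned long long bytes = 262144ULL * 4096ULL;
        printf("%llu bytes = %llu MiB\n", bytes, bytes >> 20);  /* 1073741824 bytes = 1024 MiB */
        return 0;
    }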
00:27:32.614 [2024-12-08 06:14:55.495077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93092 ]
00:27:32.614 [2024-12-08 06:14:55.642862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:32.871 [2024-12-08 06:14:55.679507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:27:32.871 [2024-12-08 06:14:55.762990] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 [reported twice]
00:27:33.131 [2024-12-08 06:14:55.919499] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration, duration: 0.006 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.919627] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev, duration: 0.043 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.919693] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:33.131 [2024-12-08 06:14:55.919985] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:33.131 [2024-12-08 06:14:55.920023] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev, duration: 0.325 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.920554] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:27:33.131 [2024-12-08 06:14:55.920603] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block, duration: 0.039 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.920699] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block, duration: 0.028 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.921121] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools, duration: 0.327 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.921292] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands, duration: 0.093 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.921362] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device, duration: 0.007 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.921426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:33.131 [2024-12-08 06:14:55.922940] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel, duration: 1.475 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.923041] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands, duration: 0.012 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.923138] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:33.131 [2024-12-08 06:14:55.923171] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
00:27:33.131 [2024-12-08 06:14:55.923403] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
00:27:33.131 [2024-12-08 06:14:55.923473] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB, L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count 5
00:27:33.131 [2024-12-08 06:14:55.923549] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout, duration: 0.447 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.923699] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout, duration: 0.071 ms, status: 0
00:27:33.131 [2024-12-08 06:14:55.923888] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks): sb: 0.00 / 0.12 MiB; l2p: 0.12 / 80.00 MiB; band_md: 80.12 / 0.50 MiB; band_md_mirror: 80.62 / 0.50 MiB; nvc_md: 113.88 / 0.12 MiB; nvc_md_mirror: 114.00 / 0.12 MiB; p2l0: 81.12 / 8.00 MiB; p2l1: 89.12 / 8.00 MiB; p2l2: 97.12 / 8.00 MiB; p2l3: 105.12 / 8.00 MiB; trim_md: 113.12 / 0.25 MiB; trim_md_mirror: 113.38 / 0.25 MiB; trim_log: 113.62 / 0.12 MiB; trim_log_mirror: 113.75 / 0.12 MiB
00:27:33.131 [2024-12-08 06:14:55.924408] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks): sb_mirror: 0.00 / 0.12 MiB; vmap: 102400.25 / 3.38 MiB; data_btm: 0.25 / 102400.00 MiB
00:27:33.132 [2024-12-08 06:14:55.924554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20; type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000; type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80; type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80; type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800; type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800; type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800; type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800; type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40; type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40; type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20; type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20; type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20; type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:27:33.132 [2024-12-08 06:14:55.924745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20; type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000; type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360; type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:33.132 [2024-12-08 06:14:55.924852] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade, duration: 1.001 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.942121] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata, duration: 17.088 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.942277] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses, duration: 0.071 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.949931] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache, duration: 7.463 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.950028] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map, duration: 0.003 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.950210] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map, duration: 0.094 ms, status: 0
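A consistency check on the layout dump above, using only values from the log: the 80.00 MiB l2p region is exactly L2P entries times L2P address size, 20971520 x 4 bytes. A sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* From the layout dump: 20971520 L2P entries, 4-byte addresses */
        unsigned long long l2p_bytes = 20971520ULL * 4ULL;
        printf("l2p region: %.2f MiB\n", l2p_bytes / 1048576.0);  /* prints 80.00, matching the dump */
        return 0;
    }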
00:27:33.132 [2024-12-08 06:14:55.950406] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata, duration: 0.112 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.955025] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc, duration: 4.492 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.955164] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:27:33.132 [2024-12-08 06:14:55.955190] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:33.132 [2024-12-08 06:14:55.955268] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata, duration: 0.092 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.967114] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata, duration: 11.744 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.967279] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata, duration: 0.087 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.967402] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata, duration: 0.002 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.967856] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing, duration: 0.317 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.967878] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:27:33.132 [2024-12-08 06:14:55.967922] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints, duration: 0.019 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.975627] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:27:33.132 [2024-12-08 06:14:55.975921] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P, duration: 7.941 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.978087] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P, duration: 2.047 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.978265] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization, duration: 0.035 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.978380] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller, duration: 0.015 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.978419] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:33.132 [2024-12-08 06:14:55.978469] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup, duration: 0.017 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.982177] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state, duration: 3.586 ms, status: 0
00:27:33.132 [2024-12-08 06:14:55.982328] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization, duration: 0.035 ms, status: 0
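Every management step in this log is reported through the same trace_step pattern in mngt/ftl_mngt.c: an Action line, the step name, the measured duration in ms, and a status code. A minimal self-contained sketch of that timing pattern (illustrative only; run_step and noop are hypothetical helpers, not SPDK code):

    #include <stdio.h>
    #include <time.h>

    /* Illustrative only: time a named step and report duration in ms plus a
     * status code, mirroring the Action/name/duration/status trace lines. */
    static int run_step(const char *name, int (*fn)(void))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("Action\nname: %s\nduration: %.3f ms\nstatus: %d\n", name, ms, status);
        return status;
    }

    static int noop(void) { return 0; }

    int main(void)
    {
        return run_step("Start core poller", noop);
    }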
00:27:33.132 [2024-12-08 06:14:55.983521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 63.461 ms, result 0
00:27:34.509 [2024-12-08T06:14:58.489Z] Copying: 22/1024 [MB] (22 MBps)
[... 42 intermediate progress updates (22-25 MBps each) condensed ...]
00:28:16.431 [2024-12-08T06:15:39.476Z] Copying: 1024/1024 [MB] (average 23 MBps)
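The reported average is consistent with the wall clock: roughly 1024 MiB copied between FTL startup finishing (~06:14:55.98) and the final progress line (~06:15:39.48), about 43.5 s, works out to ~23.5 MBps. A quick check of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* 1024 MiB copied over ~43.5 s of wall clock (timestamps from the log) */
        double mib = 1024.0, seconds = 43.5;
        printf("average: %.1f MBps\n", mib / seconds);  /* ~23.5, matching "average 23 MBps" */
        return 0;
    }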
00:28:16.431 [2024-12-08 06:15:39.376545] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel, duration: 0.005 ms, status: 0
00:28:16.431 [2024-12-08 06:15:39.376726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:16.431 [2024-12-08 06:15:39.377272] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device, duration: 0.520 ms, status: 0
00:28:16.431 [2024-12-08 06:15:39.377666] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller, duration: 0.278 ms, status: 0
00:28:16.432 [2024-12-08 06:15:39.377772] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata, duration: 0.006 ms, status: 0
00:28:16.432 [2024-12-08 06:15:39.377893] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state, duration: 0.019 ms, status: 0
00:28:16.432 [2024-12-08 06:15:39.377973] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:16.432 [2024-12-08 06:15:39.377993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-85: 0 / 261120 wr_cnt: 0 state: free [85 identical entries condensed]
00:28:16.433 [2024-12-08 06:15:39.379241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 
0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:16.433 [2024-12-08 06:15:39.379493] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:16.433 [2024-12-08 06:15:39.379506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9b4d07a5-2847-4a96-b9c4-06f714408bf5 00:28:16.433 [2024-12-08 06:15:39.379520] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:16.433 [2024-12-08 06:15:39.379538] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:16.433 [2024-12-08 06:15:39.379551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:16.433 [2024-12-08 06:15:39.379564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:16.433 [2024-12-08 06:15:39.379586] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:16.433 [2024-12-08 06:15:39.379608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:16.433 [2024-12-08 06:15:39.379628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:16.433 [2024-12-08 06:15:39.379640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:16.433 [2024-12-08 06:15:39.379652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:16.433 [2024-12-08 06:15:39.379665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.433 [2024-12-08 06:15:39.379678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:16.433 [2024-12-08 
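
Note on the statistics dump above: the WAF line is simply total writes divided by user writes, so it reads "inf" at this point because only the 32 metadata writes have occurred and no user data has landed yet; after the restore later in this log it becomes 122400 / 122368 ~ 1.0003. A minimal standalone C sketch of that arithmetic (illustrative names, not the internals of ftl_dev_dump_stats):

    #include <math.h>
    #include <stdio.h>

    /* Write amplification factor as reported in the dumps above:
     * total device writes divided by writes issued by the user.
     * Names here are illustrative, not SPDK's. */
    static double waf(double total_writes, double user_writes)
    {
            return user_writes == 0 ? INFINITY : total_writes / user_writes;
    }

    int main(void)
    {
            printf("WAF: %.4f\n", waf(32, 0));          /* inf - metadata-only writes */
            printf("WAF: %.4f\n", waf(122400, 122368)); /* 1.0003 - after the restore */
            return 0;
    }
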
06:15:39.379692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:28:16.433 [2024-12-08 06:15:39.379705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.381284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.433 [2024-12-08 06:15:39.381341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:16.433 [2024-12-08 06:15:39.381358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:28:16.433 [2024-12-08 06:15:39.381371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.381465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.433 [2024-12-08 06:15:39.381482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:16.433 [2024-12-08 06:15:39.381508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:16.433 [2024-12-08 06:15:39.381521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.387298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.387353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.433 [2024-12-08 06:15:39.387371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.387384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.387468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.387488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.433 [2024-12-08 06:15:39.387503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.387516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.387612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.387646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.433 [2024-12-08 06:15:39.387662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.387675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.387701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.387716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.433 [2024-12-08 06:15:39.387730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.387742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.396160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.396252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.433 [2024-12-08 06:15:39.396270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.396281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.403795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.403859] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:16.433 [2024-12-08 06:15:39.403877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.403908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.403965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.403983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:16.433 [2024-12-08 06:15:39.404002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.404011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.404070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.404108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:16.433 [2024-12-08 06:15:39.404142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.404151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.404218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.404257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:16.433 [2024-12-08 06:15:39.404271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.404281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.404316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.404339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:16.433 [2024-12-08 06:15:39.404351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.404360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.404401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.404415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:16.433 [2024-12-08 06:15:39.404431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.404441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.404486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.433 [2024-12-08 06:15:39.404501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:16.433 [2024-12-08 06:15:39.404511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.433 [2024-12-08 06:15:39.404521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.433 [2024-12-08 06:15:39.404682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.079 ms, result 0 00:28:16.693 00:28:16.693 00:28:16.693 06:15:39 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:18.592 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:18.592 06:15:41 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:18.850 [2024-12-08 06:15:41.699310] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:18.850 [2024-12-08 06:15:41.699504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93542 ] 00:28:18.850 [2024-12-08 06:15:41.846955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.850 [2024-12-08 06:15:41.892453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.108 [2024-12-08 06:15:41.987084] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.108 [2024-12-08 06:15:41.987210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.108 [2024-12-08 06:15:42.146568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.108 [2024-12-08 06:15:42.146622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:19.108 [2024-12-08 06:15:42.146660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.108 [2024-12-08 06:15:42.146670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.108 [2024-12-08 06:15:42.146729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.108 [2024-12-08 06:15:42.146747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.108 [2024-12-08 06:15:42.146758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:19.108 [2024-12-08 06:15:42.146811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.108 [2024-12-08 06:15:42.146841] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:19.108 [2024-12-08 06:15:42.147129] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:19.108 [2024-12-08 06:15:42.147153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.108 [2024-12-08 06:15:42.147164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.108 [2024-12-08 06:15:42.147212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:28:19.108 [2024-12-08 06:15:42.147229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.108 [2024-12-08 06:15:42.147785] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:19.108 [2024-12-08 06:15:42.147813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.108 [2024-12-08 06:15:42.147824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:19.108 [2024-12-08 06:15:42.147836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:19.108 [2024-12-08 06:15:42.147873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.147934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.147959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:19.109 [2024-12-08 06:15:42.147974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 
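
Each management step in these logs is reported as an Action / name / duration / status quadruple by trace_step in mngt/ftl_mngt.c. A hedged, self-contained approximation of that logging pattern follows; run_step and the placeholder step are hypothetical helpers, not the actual SPDK implementation:

    #include <stdio.h>
    #include <time.h>

    /* Illustrative only: times one management step and prints it in the
     * same Action/name/duration/status shape seen in the log above. */
    static int run_step(const char *name, int (*fn)(void))
    {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            int status = fn();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e6;
            printf("Action\n name: %s\n duration: %.3f ms\n status: %d\n",
                   name, ms, status);
            return status;
    }

    static int load_super_block(void) { return 0; } /* placeholder step */

    int main(void) { return run_step("Load super block", load_super_block); }
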
00:28:19.109 [2024-12-08 06:15:42.147983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.148431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.148452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.109 [2024-12-08 06:15:42.148465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:28:19.109 [2024-12-08 06:15:42.148476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.148617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.148656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.109 [2024-12-08 06:15:42.148668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:28:19.109 [2024-12-08 06:15:42.148678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.148711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.148727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:19.109 [2024-12-08 06:15:42.148738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:19.109 [2024-12-08 06:15:42.148747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.148787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.109 [2024-12-08 06:15:42.150362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.150402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.109 [2024-12-08 06:15:42.150436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:28:19.109 [2024-12-08 06:15:42.150472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.150538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.150571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.109 [2024-12-08 06:15:42.150582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:19.109 [2024-12-08 06:15:42.150604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.150652] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:19.109 [2024-12-08 06:15:42.150698] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:19.109 [2024-12-08 06:15:42.150747] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:19.109 [2024-12-08 06:15:42.150777] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:19.109 [2024-12-08 06:15:42.150913] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.109 [2024-12-08 06:15:42.150928] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.109 [2024-12-08 06:15:42.150941] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:28:19.109 [2024-12-08 06:15:42.150955] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.109 [2024-12-08 06:15:42.150968] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.109 [2024-12-08 06:15:42.150978] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:19.109 [2024-12-08 06:15:42.150997] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.109 [2024-12-08 06:15:42.151007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.109 [2024-12-08 06:15:42.151017] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:19.109 [2024-12-08 06:15:42.151028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.151038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.109 [2024-12-08 06:15:42.151049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:28:19.109 [2024-12-08 06:15:42.151059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.151160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.109 [2024-12-08 06:15:42.151174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.109 [2024-12-08 06:15:42.151185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:19.109 [2024-12-08 06:15:42.151194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.109 [2024-12-08 06:15:42.151362] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.109 [2024-12-08 06:15:42.151384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.109 [2024-12-08 06:15:42.151395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.109 [2024-12-08 06:15:42.151498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.109 [2024-12-08 06:15:42.151528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.109 [2024-12-08 06:15:42.151548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.109 [2024-12-08 06:15:42.151558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:19.109 [2024-12-08 06:15:42.151567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.109 [2024-12-08 06:15:42.151577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.109 [2024-12-08 06:15:42.151587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:19.109 [2024-12-08 06:15:42.151596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:28:19.109 [2024-12-08 06:15:42.151632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.109 [2024-12-08 06:15:42.151663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.109 [2024-12-08 06:15:42.151692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.109 [2024-12-08 06:15:42.151719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:19.109 [2024-12-08 06:15:42.151779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:19.109 [2024-12-08 06:15:42.151807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.109 [2024-12-08 06:15:42.151825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.109 [2024-12-08 06:15:42.151850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:19.109 [2024-12-08 06:15:42.151885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.109 [2024-12-08 06:15:42.151895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.109 [2024-12-08 06:15:42.151905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:19.109 [2024-12-08 06:15:42.151914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.109 [2024-12-08 06:15:42.151933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:19.109 [2024-12-08 06:15:42.151942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151951] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.109 [2024-12-08 06:15:42.151961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.109 [2024-12-08 06:15:42.151970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.109 [2024-12-08 06:15:42.151981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.109 [2024-12-08 06:15:42.151991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:19.109 [2024-12-08 06:15:42.152001] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.109 [2024-12-08 06:15:42.152010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.367 [2024-12-08 06:15:42.152020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.367 [2024-12-08 06:15:42.152030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.367 [2024-12-08 06:15:42.152041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.367 [2024-12-08 06:15:42.152054] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.367 [2024-12-08 06:15:42.152070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.367 [2024-12-08 06:15:42.152081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:19.367 [2024-12-08 06:15:42.152092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:19.367 [2024-12-08 06:15:42.152102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:19.367 [2024-12-08 06:15:42.152112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:19.367 [2024-12-08 06:15:42.152139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:19.367 [2024-12-08 06:15:42.152150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:19.367 [2024-12-08 06:15:42.152160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:19.367 [2024-12-08 06:15:42.152171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:19.367 [2024-12-08 06:15:42.152181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:19.367 [2024-12-08 06:15:42.152192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:19.367 [2024-12-08 06:15:42.152202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:19.367 [2024-12-08 06:15:42.152225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:19.367 [2024-12-08 06:15:42.152236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:19.367 [2024-12-08 06:15:42.152250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:19.367 [2024-12-08 06:15:42.152262] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.367 [2024-12-08 06:15:42.152289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.367 [2024-12-08 06:15:42.152300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.367 [2024-12-08 06:15:42.152310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.367 [2024-12-08 06:15:42.152353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.368 [2024-12-08 06:15:42.152364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.368 [2024-12-08 06:15:42.152376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.152387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.368 [2024-12-08 06:15:42.152398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:28:19.368 [2024-12-08 06:15:42.152408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.172564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.172640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.368 [2024-12-08 06:15:42.172669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.075 ms 00:28:19.368 [2024-12-08 06:15:42.172691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.172849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.172869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:19.368 [2024-12-08 06:15:42.172884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:19.368 [2024-12-08 06:15:42.172897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.181873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.181915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.368 [2024-12-08 06:15:42.181951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.877 ms 00:28:19.368 [2024-12-08 06:15:42.181961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.182005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.182035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.368 [2024-12-08 06:15:42.182064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.368 [2024-12-08 06:15:42.182074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.182187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.182215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.368 [2024-12-08 06:15:42.182241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:19.368 [2024-12-08 06:15:42.182261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.182410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.182429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.368 [2024-12-08 06:15:42.182440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:28:19.368 [2024-12-08 06:15:42.182449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.187066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.187102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.368 [2024-12-08 06:15:42.187131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.578 ms 00:28:19.368 [2024-12-08 06:15:42.187141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.187345] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:19.368 [2024-12-08 06:15:42.187375] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:19.368 [2024-12-08 06:15:42.187388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.187398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:19.368 [2024-12-08 06:15:42.187409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:19.368 [2024-12-08 06:15:42.187457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.199475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.199694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:19.368 [2024-12-08 06:15:42.199722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.977 ms 00:28:19.368 [2024-12-08 06:15:42.199734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.199903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.199921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:19.368 [2024-12-08 06:15:42.199943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:19.368 [2024-12-08 06:15:42.199953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.200016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.200055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:19.368 [2024-12-08 06:15:42.200066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:19.368 [2024-12-08 06:15:42.200079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.200486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.200507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:19.368 [2024-12-08 06:15:42.200531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:28:19.368 [2024-12-08 06:15:42.200541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.200582] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
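
The NV cache line above reports chunk occupancy ("full chunks = 2, empty chunks = 2") as the state is loaded. A small illustrative tally in the same shape; the enum and the hard-coded chunk list are hypothetical, and the real accounting lives in ftl_nv_cache.c:

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical chunk-state tally mirroring the
     * "full chunks = N, empty chunks = M" line above. */
    enum chunk_state { CHUNK_EMPTY, CHUNK_OPEN, CHUNK_FULL };

    int main(void)
    {
            enum chunk_state chunks[] = { CHUNK_FULL, CHUNK_FULL,
                                          CHUNK_EMPTY, CHUNK_EMPTY };
            int full = 0, empty = 0;
            for (size_t i = 0; i < sizeof chunks / sizeof *chunks; i++) {
                    if (chunks[i] == CHUNK_FULL)  full++;
                    if (chunks[i] == CHUNK_EMPTY) empty++;
            }
            printf("FTL NV Cache: full chunks = %d, empty chunks = %d\n",
                   full, empty);
            return 0;
    }
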
00:28:19.368 [2024-12-08 06:15:42.200599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.200609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:19.368 [2024-12-08 06:15:42.200619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:19.368 [2024-12-08 06:15:42.200632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.208394] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:19.368 [2024-12-08 06:15:42.208578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.208596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:19.368 [2024-12-08 06:15:42.208607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.923 ms 00:28:19.368 [2024-12-08 06:15:42.208616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.210728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.210763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:19.368 [2024-12-08 06:15:42.210791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:28:19.368 [2024-12-08 06:15:42.210801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.210882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.210900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:19.368 [2024-12-08 06:15:42.210910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:19.368 [2024-12-08 06:15:42.210919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.210976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.210998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:19.368 [2024-12-08 06:15:42.211016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:19.368 [2024-12-08 06:15:42.211026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.211066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:19.368 [2024-12-08 06:15:42.211091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.211111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:19.368 [2024-12-08 06:15:42.211122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:19.368 [2024-12-08 06:15:42.211130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.215629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 06:15:42.215672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:19.368 [2024-12-08 06:15:42.215695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.473 ms 00:28:19.368 [2024-12-08 06:15:42.215705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.215831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.368 [2024-12-08 
06:15:42.215849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:19.368 [2024-12-08 06:15:42.215861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:19.368 [2024-12-08 06:15:42.215870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.368 [2024-12-08 06:15:42.217004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 69.930 ms, result 0 00:28:20.301  [2024-12-08T06:15:44.278Z .. 2024-12-08T06:16:27.482Z] Copying: 22/1024 .. 1019/1024 [MB] (22-24 MBps per step) [2024-12-08T06:16:27.743Z] Copying: 1048340/1048576 [kB] (4172 kBps) [2024-12-08T06:16:27.743Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-08 06:16:27.544811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-08 06:16:27.544902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:04.698 [2024-12-08 06:16:27.544940]
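
Sanity check on the copy rate: the spdk_dd progress above moves 1024 MiB between roughly 06:15:41.7 (process start) and 06:16:27.7, about 46 s, which matches the reported average of 22 MBps. A one-liner confirming the arithmetic, with the elapsed time assumed from the surrounding timestamps:

    #include <stdio.h>

    int main(void)
    {
            double mib = 1024.0;
            double elapsed_s = 46.0; /* assumed: 06:15:41.7 -> 06:16:27.7 */
            printf("average: %.0f MBps\n", mib / elapsed_s); /* ~22 */
            return 0;
    }
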
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:04.698 [2024-12-08 06:16:27.544953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-08 06:16:27.548393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:04.698 [2024-12-08 06:16:27.550531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-08 06:16:27.550800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:04.698 [2024-12-08 06:16:27.550844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:29:04.698 [2024-12-08 06:16:27.550857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-08 06:16:27.563567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-08 06:16:27.563627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:04.698 [2024-12-08 06:16:27.563654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.150 ms 00:29:04.698 [2024-12-08 06:16:27.563666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-08 06:16:27.563727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-08 06:16:27.563744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:04.698 [2024-12-08 06:16:27.563756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:04.698 [2024-12-08 06:16:27.563767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-08 06:16:27.563822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-08 06:16:27.563851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:04.698 [2024-12-08 06:16:27.563863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:04.698 [2024-12-08 06:16:27.563879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-08 06:16:27.563908] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:04.698 [2024-12-08 06:16:27.563924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122368 / 261120 wr_cnt: 1 state: open 00:29:04.698 [2024-12-08 06:16:27.563937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-08 06:16:27.565251] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:04.699 [2024-12-08 06:16:27.565262] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9b4d07a5-2847-4a96-b9c4-06f714408bf5 00:29:04.699 [2024-12-08 06:16:27.565279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122368 00:29:04.699 [2024-12-08 06:16:27.565290] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122400 00:29:04.699 [2024-12-08 06:16:27.565300] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122368 00:29:04.699 [2024-12-08 06:16:27.565311] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:29:04.699 [2024-12-08 06:16:27.565322] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:04.699 [2024-12-08 06:16:27.565332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:04.699 [2024-12-08 06:16:27.565343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:04.699 [2024-12-08 06:16:27.565353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:04.699 [2024-12-08
06:16:27.565362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:04.699 [2024-12-08 06:16:27.565373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.699 [2024-12-08 06:16:27.565388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:04.699 [2024-12-08 06:16:27.565400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:29:04.699 [2024-12-08 06:16:27.565410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.699 [2024-12-08 06:16:27.566871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.699 [2024-12-08 06:16:27.566907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:04.699 [2024-12-08 06:16:27.566920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:29:04.699 [2024-12-08 06:16:27.566930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.699 [2024-12-08 06:16:27.567012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.699 [2024-12-08 06:16:27.567027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:04.699 [2024-12-08 06:16:27.567039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:04.699 [2024-12-08 06:16:27.567060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.699 [2024-12-08 06:16:27.572023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.699 [2024-12-08 06:16:27.572058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:04.699 [2024-12-08 06:16:27.572072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.699 [2024-12-08 06:16:27.572089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.699 [2024-12-08 06:16:27.572177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.699 [2024-12-08 06:16:27.572203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:04.699 [2024-12-08 06:16:27.572223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.699 [2024-12-08 06:16:27.572234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.699 [2024-12-08 06:16:27.572318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.699 [2024-12-08 06:16:27.572340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:04.699 [2024-12-08 06:16:27.572352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.699 [2024-12-08 06:16:27.572372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.572402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.572415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:04.700 [2024-12-08 06:16:27.572427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.572436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.581499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.581736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:04.700 [2024-12-08 06:16:27.581883] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.581946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.589628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.589845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:04.700 [2024-12-08 06:16:27.589990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.590040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.590105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.590260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:04.700 [2024-12-08 06:16:27.590318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.590357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.590447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.590591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:04.700 [2024-12-08 06:16:27.590661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.590705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.590890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.590957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:04.700 [2024-12-08 06:16:27.591048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.591065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.591106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.591149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:04.700 [2024-12-08 06:16:27.591162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.591172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.591232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.591250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:04.700 [2024-12-08 06:16:27.591262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.591273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.591335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-08 06:16:27.591360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:04.700 [2024-12-08 06:16:27.591372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-08 06:16:27.591382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-08 06:16:27.591533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 49.322 ms, result 0 00:29:05.637 00:29:05.637 00:29:05.637 06:16:28 
ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:05.637 [2024-12-08 06:16:28.436539] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:05.637 [2024-12-08 06:16:28.436718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93996 ] 00:29:05.637 [2024-12-08 06:16:28.578677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.637 [2024-12-08 06:16:28.614777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.898 [2024-12-08 06:16:28.701671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:05.898 [2024-12-08 06:16:28.701764] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:05.898 [2024-12-08 06:16:28.857917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.857976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:05.898 [2024-12-08 06:16:28.858023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:05.898 [2024-12-08 06:16:28.858034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.858093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.858111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:05.898 [2024-12-08 06:16:28.858122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:05.898 [2024-12-08 06:16:28.858148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.858176] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:05.898 [2024-12-08 06:16:28.858509] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:05.898 [2024-12-08 06:16:28.858542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.858553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:05.898 [2024-12-08 06:16:28.858564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:29:05.898 [2024-12-08 06:16:28.858574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.859137] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:05.898 [2024-12-08 06:16:28.859164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.859176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:05.898 [2024-12-08 06:16:28.859222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:05.898 [2024-12-08 06:16:28.859234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.859306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.859326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:29:05.898 [2024-12-08 06:16:28.859343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:05.898 [2024-12-08 06:16:28.859355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.859782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.859952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:05.898 [2024-12-08 06:16:28.859978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:29:05.898 [2024-12-08 06:16:28.859990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.860091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.860119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:05.898 [2024-12-08 06:16:28.860140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:05.898 [2024-12-08 06:16:28.860150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.860213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.860234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:05.898 [2024-12-08 06:16:28.860246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:05.898 [2024-12-08 06:16:28.860256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.860300] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:05.898 [2024-12-08 06:16:28.861862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.861908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:05.898 [2024-12-08 06:16:28.861939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:29:05.898 [2024-12-08 06:16:28.861948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.898 [2024-12-08 06:16:28.862000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.898 [2024-12-08 06:16:28.862015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:05.898 [2024-12-08 06:16:28.862026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:05.899 [2024-12-08 06:16:28.862036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.899 [2024-12-08 06:16:28.862078] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:05.899 [2024-12-08 06:16:28.862116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:05.899 [2024-12-08 06:16:28.862163] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:05.899 [2024-12-08 06:16:28.862185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:05.899 [2024-12-08 06:16:28.862342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:05.899 [2024-12-08 06:16:28.862358] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 
0x48 bytes 00:29:05.899 [2024-12-08 06:16:28.862370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:05.899 [2024-12-08 06:16:28.862391] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:05.899 [2024-12-08 06:16:28.862404] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:05.899 [2024-12-08 06:16:28.862415] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:05.899 [2024-12-08 06:16:28.862427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:05.899 [2024-12-08 06:16:28.862436] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:05.899 [2024-12-08 06:16:28.862455] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:05.899 [2024-12-08 06:16:28.862466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.899 [2024-12-08 06:16:28.862476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:05.899 [2024-12-08 06:16:28.862494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:29:05.899 [2024-12-08 06:16:28.862504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.899 [2024-12-08 06:16:28.862604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.899 [2024-12-08 06:16:28.862616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:05.899 [2024-12-08 06:16:28.862627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:05.899 [2024-12-08 06:16:28.862636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.899 [2024-12-08 06:16:28.862753] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:05.899 [2024-12-08 06:16:28.862774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:05.899 [2024-12-08 06:16:28.862786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:05.899 [2024-12-08 06:16:28.862796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.899 [2024-12-08 06:16:28.862806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:05.899 [2024-12-08 06:16:28.862815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:05.899 [2024-12-08 06:16:28.862824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:05.899 [2024-12-08 06:16:28.862834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:05.899 [2024-12-08 06:16:28.862843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:05.899 [2024-12-08 06:16:28.862853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:05.899 [2024-12-08 06:16:28.862861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:05.899 [2024-12-08 06:16:28.862870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:05.899 [2024-12-08 06:16:28.862879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:05.899 [2024-12-08 06:16:28.862888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:05.899 [2024-12-08 06:16:28.862898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:05.899 [2024-12-08 06:16:28.862907] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.899 [2024-12-08 06:16:28.862916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:05.899 [2024-12-08 06:16:28.862928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:05.899 [2024-12-08 06:16:28.862937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.899 [2024-12-08 06:16:28.862946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:05.899 [2024-12-08 06:16:28.862955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:05.899 [2024-12-08 06:16:28.862964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.899 [2024-12-08 06:16:28.862973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:05.899 [2024-12-08 06:16:28.862982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:05.899 [2024-12-08 06:16:28.862991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.899 [2024-12-08 06:16:28.863000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:05.899 [2024-12-08 06:16:28.863009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:05.899 [2024-12-08 06:16:28.863018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.899 [2024-12-08 06:16:28.863026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:05.899 [2024-12-08 06:16:28.863035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:05.899 [2024-12-08 06:16:28.863044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.899 [2024-12-08 06:16:28.863052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:05.899 [2024-12-08 06:16:28.863062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:05.899 [2024-12-08 06:16:28.863073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:05.899 [2024-12-08 06:16:28.863082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:05.899 [2024-12-08 06:16:28.863091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:05.899 [2024-12-08 06:16:28.863100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:05.899 [2024-12-08 06:16:28.863109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:05.899 [2024-12-08 06:16:28.863118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:05.899 [2024-12-08 06:16:28.863126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.899 [2024-12-08 06:16:28.863135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:05.899 [2024-12-08 06:16:28.863144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:05.899 [2024-12-08 06:16:28.863153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.899 [2024-12-08 06:16:28.863162] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:05.899 [2024-12-08 06:16:28.863171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:05.899 [2024-12-08 06:16:28.863181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:05.899 [2024-12-08 06:16:28.863206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.899 
[2024-12-08 06:16:28.863227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:05.899 [2024-12-08 06:16:28.863552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:05.899 [2024-12-08 06:16:28.863624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:05.899 [2024-12-08 06:16:28.863671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:05.899 [2024-12-08 06:16:28.863812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:05.899 [2024-12-08 06:16:28.863862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:05.899 [2024-12-08 06:16:28.863902] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:05.899 [2024-12-08 06:16:28.864042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:05.899 [2024-12-08 06:16:28.864166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:05.899 [2024-12-08 06:16:28.864318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:05.899 [2024-12-08 06:16:28.864386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:05.899 [2024-12-08 06:16:28.864572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:05.899 [2024-12-08 06:16:28.864702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:05.899 [2024-12-08 06:16:28.864903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:05.899 [2024-12-08 06:16:28.864966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:05.899 [2024-12-08 06:16:28.865023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:05.899 [2024-12-08 06:16:28.865170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:05.899 [2024-12-08 06:16:28.865192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:05.899 [2024-12-08 06:16:28.865245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:05.899 [2024-12-08 06:16:28.865269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:05.899 [2024-12-08 06:16:28.865280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:05.899 [2024-12-08 06:16:28.865292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:05.899 [2024-12-08 06:16:28.865302] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:29:05.899 [2024-12-08 06:16:28.865315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:05.899 [2024-12-08 06:16:28.865336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:05.899 [2024-12-08 06:16:28.865347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:05.899 [2024-12-08 06:16:28.865358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:05.900 [2024-12-08 06:16:28.865369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:05.900 [2024-12-08 06:16:28.865382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.865392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:05.900 [2024-12-08 06:16:28.865405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:29:05.900 [2024-12-08 06:16:28.865417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.882561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.882626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:05.900 [2024-12-08 06:16:28.882652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.062 ms 00:29:05.900 [2024-12-08 06:16:28.882674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.882812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.882834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:05.900 [2024-12-08 06:16:28.882850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:05.900 [2024-12-08 06:16:28.882869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.892817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.892864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:05.900 [2024-12-08 06:16:28.892903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.844 ms 00:29:05.900 [2024-12-08 06:16:28.892913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.892959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.892985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:05.900 [2024-12-08 06:16:28.892996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:05.900 [2024-12-08 06:16:28.893005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.893143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.893161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:05.900 [2024-12-08 06:16:28.893172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:05.900 [2024-12-08 06:16:28.893187] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.893392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.893413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:05.900 [2024-12-08 06:16:28.893434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:29:05.900 [2024-12-08 06:16:28.893444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.898240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.898277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:05.900 [2024-12-08 06:16:28.898320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:29:05.900 [2024-12-08 06:16:28.898330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.898456] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:05.900 [2024-12-08 06:16:28.898478] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:05.900 [2024-12-08 06:16:28.898490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.898509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:05.900 [2024-12-08 06:16:28.898520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:05.900 [2024-12-08 06:16:28.898529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.910765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.910820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:05.900 [2024-12-08 06:16:28.910852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.217 ms 00:29:05.900 [2024-12-08 06:16:28.910862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.910989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.911005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:05.900 [2024-12-08 06:16:28.911016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:05.900 [2024-12-08 06:16:28.911038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.911098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.911122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:05.900 [2024-12-08 06:16:28.911133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:05.900 [2024-12-08 06:16:28.911147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.911632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.911667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:05.900 [2024-12-08 06:16:28.911687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:29:05.900 [2024-12-08 06:16:28.911708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:29:05.900 [2024-12-08 06:16:28.911731] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:05.900 [2024-12-08 06:16:28.911747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.911757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:05.900 [2024-12-08 06:16:28.911769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:05.900 [2024-12-08 06:16:28.911786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.920417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:05.900 [2024-12-08 06:16:28.920604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.920622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:05.900 [2024-12-08 06:16:28.920634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.792 ms 00:29:05.900 [2024-12-08 06:16:28.920643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.922994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.923029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:05.900 [2024-12-08 06:16:28.923062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:29:05.900 [2024-12-08 06:16:28.923073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.923151] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:29:05.900 [2024-12-08 06:16:28.923849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.923878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:05.900 [2024-12-08 06:16:28.923892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:29:05.900 [2024-12-08 06:16:28.923902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.923995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.924015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:05.900 [2024-12-08 06:16:28.924026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:05.900 [2024-12-08 06:16:28.924036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.924077] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:05.900 [2024-12-08 06:16:28.924094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.924123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:05.900 [2024-12-08 06:16:28.924135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:05.900 [2024-12-08 06:16:28.924146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.928143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.928217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:05.900 [2024-12-08 06:16:28.928261] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.975 ms 00:29:05.900 [2024-12-08 06:16:28.928272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.928362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.900 [2024-12-08 06:16:28.928387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:05.900 [2024-12-08 06:16:28.928398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:05.900 [2024-12-08 06:16:28.928408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.900 [2024-12-08 06:16:28.929575] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 71.124 ms, result 0 00:29:07.276  [2024-12-08T06:16:31.258Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-08T06:16:32.191Z] Copying: 45/1024 [MB] (23 MBps) [2024-12-08T06:16:33.124Z] Copying: 69/1024 [MB] (24 MBps) [2024-12-08T06:16:34.496Z] Copying: 94/1024 [MB] (24 MBps) [2024-12-08T06:16:35.431Z] Copying: 117/1024 [MB] (23 MBps) [2024-12-08T06:16:36.382Z] Copying: 142/1024 [MB] (24 MBps) [2024-12-08T06:16:37.317Z] Copying: 167/1024 [MB] (24 MBps) [2024-12-08T06:16:38.250Z] Copying: 191/1024 [MB] (24 MBps) [2024-12-08T06:16:39.186Z] Copying: 215/1024 [MB] (24 MBps) [2024-12-08T06:16:40.563Z] Copying: 239/1024 [MB] (24 MBps) [2024-12-08T06:16:41.131Z] Copying: 263/1024 [MB] (24 MBps) [2024-12-08T06:16:42.508Z] Copying: 288/1024 [MB] (24 MBps) [2024-12-08T06:16:43.456Z] Copying: 312/1024 [MB] (24 MBps) [2024-12-08T06:16:44.403Z] Copying: 337/1024 [MB] (24 MBps) [2024-12-08T06:16:45.337Z] Copying: 361/1024 [MB] (24 MBps) [2024-12-08T06:16:46.270Z] Copying: 385/1024 [MB] (24 MBps) [2024-12-08T06:16:47.203Z] Copying: 409/1024 [MB] (24 MBps) [2024-12-08T06:16:48.138Z] Copying: 433/1024 [MB] (24 MBps) [2024-12-08T06:16:49.516Z] Copying: 457/1024 [MB] (24 MBps) [2024-12-08T06:16:50.452Z] Copying: 481/1024 [MB] (24 MBps) [2024-12-08T06:16:51.389Z] Copying: 505/1024 [MB] (24 MBps) [2024-12-08T06:16:52.327Z] Copying: 529/1024 [MB] (23 MBps) [2024-12-08T06:16:53.264Z] Copying: 553/1024 [MB] (23 MBps) [2024-12-08T06:16:54.200Z] Copying: 577/1024 [MB] (23 MBps) [2024-12-08T06:16:55.137Z] Copying: 601/1024 [MB] (24 MBps) [2024-12-08T06:16:56.508Z] Copying: 625/1024 [MB] (24 MBps) [2024-12-08T06:16:57.443Z] Copying: 649/1024 [MB] (23 MBps) [2024-12-08T06:16:58.430Z] Copying: 673/1024 [MB] (24 MBps) [2024-12-08T06:16:59.364Z] Copying: 697/1024 [MB] (23 MBps) [2024-12-08T06:17:00.300Z] Copying: 721/1024 [MB] (23 MBps) [2024-12-08T06:17:01.262Z] Copying: 745/1024 [MB] (24 MBps) [2024-12-08T06:17:02.197Z] Copying: 770/1024 [MB] (24 MBps) [2024-12-08T06:17:03.129Z] Copying: 794/1024 [MB] (23 MBps) [2024-12-08T06:17:04.501Z] Copying: 818/1024 [MB] (24 MBps) [2024-12-08T06:17:05.435Z] Copying: 842/1024 [MB] (24 MBps) [2024-12-08T06:17:06.367Z] Copying: 866/1024 [MB] (23 MBps) [2024-12-08T06:17:07.303Z] Copying: 891/1024 [MB] (25 MBps) [2024-12-08T06:17:08.241Z] Copying: 915/1024 [MB] (23 MBps) [2024-12-08T06:17:09.178Z] Copying: 939/1024 [MB] (23 MBps) [2024-12-08T06:17:10.555Z] Copying: 964/1024 [MB] (24 MBps) [2024-12-08T06:17:11.123Z] Copying: 988/1024 [MB] (24 MBps) [2024-12-08T06:17:11.693Z] Copying: 1012/1024 [MB] (24 MBps) [2024-12-08T06:17:11.693Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-08 06:17:11.591095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.648 [2024-12-08 06:17:11.591316] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:48.648 [2024-12-08 06:17:11.591468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:48.648 [2024-12-08 06:17:11.591620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.648 [2024-12-08 06:17:11.591704] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:48.648 [2024-12-08 06:17:11.592692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.648 [2024-12-08 06:17:11.592845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:48.648 [2024-12-08 06:17:11.593134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:29:48.648 [2024-12-08 06:17:11.593187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.648 [2024-12-08 06:17:11.593520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.648 [2024-12-08 06:17:11.593587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:48.648 [2024-12-08 06:17:11.593705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:29:48.648 [2024-12-08 06:17:11.593825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.648 [2024-12-08 06:17:11.593912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.648 [2024-12-08 06:17:11.593979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:48.648 [2024-12-08 06:17:11.594111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:48.648 [2024-12-08 06:17:11.594280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.648 [2024-12-08 06:17:11.594392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.648 [2024-12-08 06:17:11.594521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:48.648 [2024-12-08 06:17:11.594623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:48.648 [2024-12-08 06:17:11.594745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.648 [2024-12-08 06:17:11.594810] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:48.648 [2024-12-08 06:17:11.594925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:48.648 [2024-12-08 06:17:11.594998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 
06:17:11.595780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:48.648 [2024-12-08 06:17:11.595995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:29:48.649 [2024-12-08 06:17:11.596058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:48.649 [2024-12-08 06:17:11.596349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:29:48.649 [2024-12-08 06:17:11.596813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:29:48.650 [2024-12-08 06:17:11.596823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:29:48.650 [2024-12-08 06:17:11.596833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:48.650 [2024-12-08 06:17:11.596852] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:48.650 [2024-12-08 06:17:11.596868] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9b4d07a5-2847-4a96-b9c4-06f714408bf5
00:29:48.650 [2024-12-08 06:17:11.596879] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:29:48.650 [2024-12-08 06:17:11.596889] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 8736
00:29:48.650 [2024-12-08 06:17:11.596899] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 8704
00:29:48.650 [2024-12-08 06:17:11.596910] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0037
00:29:48.650 [2024-12-08 06:17:11.596920] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:48.650 [2024-12-08 06:17:11.596931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:48.650 [2024-12-08 06:17:11.596946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:48.650 [2024-12-08 06:17:11.596955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:48.650 [2024-12-08 06:17:11.596964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:29:48.650 [2024-12-08 06:17:11.596975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:48.650 [2024-12-08 06:17:11.596986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:29:48.650 [2024-12-08 06:17:11.597006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms
00:29:48.650 [2024-12-08 06:17:11.597016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.598302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:48.650 [2024-12-08 06:17:11.598325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:29:48.650 [2024-12-08 06:17:11.598337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms
00:29:48.650 [2024-12-08 06:17:11.598348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.598430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:48.650 [2024-12-08 06:17:11.598454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:29:48.650 [2024-12-08 06:17:11.598465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
00:29:48.650 [2024-12-08 06:17:11.598476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.603036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.603209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:48.650 [2024-12-08 06:17:11.603359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.603411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.603575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.603715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:48.650 [2024-12-08 06:17:11.603782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.603873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.604017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.604146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:48.650 [2024-12-08 06:17:11.604282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.604338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.604431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.604534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:48.650 [2024-12-08 06:17:11.604647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.604693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.612930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.613220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:48.650 [2024-12-08 06:17:11.613337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.613469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.620982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.621227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:48.650 [2024-12-08 06:17:11.621357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.621406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.621568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.621630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:48.650 [2024-12-08 06:17:11.621797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.621850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.621979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.622088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:48.650 [2024-12-08 06:17:11.622156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.622318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.622446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.622513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:48.650 [2024-12-08 06:17:11.622652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.622783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.622869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.622963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:48.650 [2024-12-08 06:17:11.623068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.623119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.623319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.623382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:48.650 [2024-12-08 06:17:11.623582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.623637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.623809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:48.650 [2024-12-08 06:17:11.623933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:48.650 [2024-12-08 06:17:11.623957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:48.650 [2024-12-08 06:17:11.623968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:48.650 [2024-12-08 06:17:11.624138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 33.001 ms, result 0
00:29:48.909
00:29:48.909
00:29:48.909 06:17:11 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:50.815 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:29:50.815 06:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:29:50.815 06:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:29:50.815 06:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:51.074 Process with pid 92461 is not found
00:29:51.074 Remove shared memory files
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92461
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92461 ']'
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92461
00:29:51.074 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92461) - No such process
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 92461 is not found'
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:29:51.074 06:17:13 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_band_md /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_l2p_l1 /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_l2p_l2 /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_l2p_l2_ctx /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_nvc_md /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_p2l_pool /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_sb /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_sb_shm /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_trim_bitmap /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_trim_log /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_trim_md /dev/hugepages/ftl_9b4d07a5-2847-4a96-b9c4-06f714408bf5_vmap
00:29:51.074 06:17:14 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:29:51.074 06:17:14 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:51.074 06:17:14 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:29:51.074 ************************************
00:29:51.074 END TEST ftl_restore_fast
00:29:51.074 ************************************
00:29:51.074
00:29:51.074
00:29:51.074 real 3m19.084s
00:29:51.074 user 3m5.914s
00:29:51.074 sys 0m14.999s
00:29:51.074 06:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:51.074 06:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:29:51.074 Process with pid 84939 is not found
00:29:51.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:51.074 06:17:14 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:29:51.074 06:17:14 ftl -- ftl/ftl.sh@14 -- # killprocess 84939
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@950 -- # '[' -z 84939 ']'
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@954 -- # kill -0 84939
00:29:51.074 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84939) - No such process
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84939 is not found'
00:29:51.074 06:17:14 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:29:51.074 06:17:14 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=94458
00:29:51.074 06:17:14 ftl -- ftl/ftl.sh@20 -- # waitforlisten 94458
00:29:51.074 06:17:14 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@831 -- # '[' -z 94458 ']'
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:51.074 06:17:14 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:51.333 [2024-12-08 06:17:14.172578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:29:51.333 [2024-12-08 06:17:14.172955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94458 ]
00:29:51.334 [2024-12-08 06:17:14.321583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:51.334 [2024-12-08 06:17:14.364671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:52.268 06:17:15 ftl -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:52.268 06:17:15 ftl -- common/autotest_common.sh@864 -- # return 0
00:29:52.268 06:17:15 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:52.529 nvme0n1
00:29:52.529 06:17:15 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:29:52.529 06:17:15 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:52.529 06:17:15 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:52.807 06:17:15 ftl -- ftl/common.sh@28 -- # stores=a7cb78e1-1bba-4634-824c-aa84ab9fe9a9
00:29:52.807 06:17:15 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:29:52.807 06:17:15 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7cb78e1-1bba-4634-824c-aa84ab9fe9a9
00:29:53.079 06:17:15 ftl -- ftl/ftl.sh@23 -- # killprocess 94458
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@950 -- # '[' -z 94458 ']'
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@954 -- # kill -0 94458
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@955 -- # uname
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94458
00:29:53.079 killing process with pid 94458 06:17:15 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94458'
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@969 -- # kill 94458
00:29:53.079 06:17:15 ftl -- common/autotest_common.sh@974 -- # wait 94458
00:29:53.337 06:17:16 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:53.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:53.596 Waiting for block devices as requested
00:29:53.596 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:29:53.855 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:29:53.855 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:29:53.855 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:29:59.121 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:29:59.121 06:17:21 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:29:59.121 06:17:21 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:59.121 Remove shared memory files
00:29:59.121 06:17:21 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:29:59.121 06:17:21 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:29:59.121 06:17:21 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:29:59.121 06:17:21 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:59.121 06:17:21 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:29:59.121 ************************************
00:29:59.121 END TEST ftl
00:29:59.121 ************************************
00:29:59.121
00:29:59.121
00:29:59.121 real 14m20.417s
00:29:59.121 user 16m49.606s
00:29:59.121 sys 1m40.780s
00:29:59.121 06:17:21 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:59.121 06:17:21 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:59.121 06:17:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:29:59.121 06:17:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:29:59.121 06:17:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:29:59.121 06:17:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:29:59.121 06:17:21 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:29:59.121 06:17:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:29:59.121 06:17:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:29:59.121 06:17:21 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:29:59.121 06:17:21 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:29:59.121 06:17:21 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:29:59.121 06:17:21 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:59.121 06:17:21 -- common/autotest_common.sh@10 -- # set +x
00:29:59.121 06:17:22 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:29:59.121 06:17:22 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:29:59.121 06:17:22 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:29:59.121 06:17:22 -- common/autotest_common.sh@10 -- # set +x
00:30:00.499 INFO: APP EXITING
00:30:00.499 INFO: killing all VMs
00:30:00.499 INFO: killing vhost app
00:30:00.499 INFO: EXIT DONE
00:30:00.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:01.326 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:30:01.326 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:30:01.326 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:30:01.326 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:30:01.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:02.153 Cleaning
00:30:02.153 Removing: /var/run/dpdk/spdk0/config
00:30:02.153 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:02.153 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:02.153 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:02.153 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:02.153 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:02.153 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:02.153 Removing: /var/run/dpdk/spdk0
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70186
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70349
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70555
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70638
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70659
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70771
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70781
00:30:02.153 Removing: /var/run/dpdk/spdk_pid70964
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71043
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71115
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71202
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71288
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71322
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71364
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71429
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71535
00:30:02.153 Removing: /var/run/dpdk/spdk_pid71987
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72040
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72093
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72109
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72174
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72182
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72251
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72267
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72315
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72333
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72375
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72385
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72518
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72554
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72638
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72804
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72870
00:30:02.153 Removing: /var/run/dpdk/spdk_pid72901
00:30:02.153 Removing: /var/run/dpdk/spdk_pid73344
00:30:02.153 Removing: /var/run/dpdk/spdk_pid73431
00:30:02.153 Removing: /var/run/dpdk/spdk_pid73529
00:30:02.154 Removing: /var/run/dpdk/spdk_pid73571
00:30:02.154 Removing: /var/run/dpdk/spdk_pid73597
00:30:02.154 Removing: /var/run/dpdk/spdk_pid73675
00:30:02.154 Removing: /var/run/dpdk/spdk_pid74284
00:30:02.154 Removing: /var/run/dpdk/spdk_pid74315
00:30:02.154 Removing: /var/run/dpdk/spdk_pid74806
00:30:02.154 Removing: /var/run/dpdk/spdk_pid74899
00:30:02.154 Removing: /var/run/dpdk/spdk_pid74997
00:30:02.154 Removing: /var/run/dpdk/spdk_pid75039
00:30:02.154 Removing: /var/run/dpdk/spdk_pid75059
00:30:02.154 Removing: /var/run/dpdk/spdk_pid75089
00:30:02.154 Removing: /var/run/dpdk/spdk_pid76925
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77044
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77048
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77060
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77101
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77105
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77117
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77162
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77166
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77178
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77223
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77227
00:30:02.154 Removing: /var/run/dpdk/spdk_pid77239
00:30:02.154 Removing: /var/run/dpdk/spdk_pid78621
00:30:02.154 Removing: /var/run/dpdk/spdk_pid78713
00:30:02.154 Removing: /var/run/dpdk/spdk_pid80107
00:30:02.154 Removing: /var/run/dpdk/spdk_pid81465
00:30:02.154 Removing: /var/run/dpdk/spdk_pid81547
00:30:02.154 Removing: /var/run/dpdk/spdk_pid81623
00:30:02.154 Removing: /var/run/dpdk/spdk_pid81694
00:30:02.154 Removing: /var/run/dpdk/spdk_pid81793
00:30:02.154 Removing: /var/run/dpdk/spdk_pid81862
00:30:02.154 Removing: /var/run/dpdk/spdk_pid81993
00:30:02.154 Removing: /var/run/dpdk/spdk_pid82347
00:30:02.154 Removing: /var/run/dpdk/spdk_pid82374
00:30:02.154 Removing: /var/run/dpdk/spdk_pid82834
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83008
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83097
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83196
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83234
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83260
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83540
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83578
00:30:02.413 Removing: /var/run/dpdk/spdk_pid83637
00:30:02.413 Removing: /var/run/dpdk/spdk_pid84006
00:30:02.413 Removing: /var/run/dpdk/spdk_pid84152
00:30:02.413 Removing: /var/run/dpdk/spdk_pid84939
00:30:02.413 Removing: /var/run/dpdk/spdk_pid85061
00:30:02.413 Removing: /var/run/dpdk/spdk_pid85233
00:30:02.413 Removing: /var/run/dpdk/spdk_pid85325
00:30:02.413 Removing: /var/run/dpdk/spdk_pid85682
00:30:02.413 Removing: /var/run/dpdk/spdk_pid85953
00:30:02.413 Removing: /var/run/dpdk/spdk_pid86280
00:30:02.413 Removing: /var/run/dpdk/spdk_pid86457
00:30:02.413 Removing: /var/run/dpdk/spdk_pid86583
00:30:02.413 Removing: /var/run/dpdk/spdk_pid86619
00:30:02.413 Removing: /var/run/dpdk/spdk_pid86758
00:30:02.413 Removing: /var/run/dpdk/spdk_pid86772
00:30:02.413 Removing: /var/run/dpdk/spdk_pid86808
00:30:02.413 Removing: /var/run/dpdk/spdk_pid87007
00:30:02.413 Removing: /var/run/dpdk/spdk_pid87221
00:30:02.413 Removing: /var/run/dpdk/spdk_pid87679
00:30:02.413 Removing: /var/run/dpdk/spdk_pid88154
00:30:02.413 Removing: /var/run/dpdk/spdk_pid88613
00:30:02.413 Removing: /var/run/dpdk/spdk_pid89163
00:30:02.413 Removing: /var/run/dpdk/spdk_pid89296
00:30:02.413 Removing: /var/run/dpdk/spdk_pid89384
00:30:02.413 Removing: /var/run/dpdk/spdk_pid90067
00:30:02.413 Removing: /var/run/dpdk/spdk_pid90135
00:30:02.413 Removing: /var/run/dpdk/spdk_pid90604
00:30:02.413 Removing: /var/run/dpdk/spdk_pid91030
00:30:02.413 Removing: /var/run/dpdk/spdk_pid91537
00:30:02.413 Removing: /var/run/dpdk/spdk_pid91655
00:30:02.413 Removing: /var/run/dpdk/spdk_pid91696
00:30:02.413 Removing: /var/run/dpdk/spdk_pid91755
00:30:02.413 Removing: /var/run/dpdk/spdk_pid91805
00:30:02.413 Removing: /var/run/dpdk/spdk_pid91868
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92046
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92102
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92159
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92215
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92242
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92316
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92461
00:30:02.413 Removing: /var/run/dpdk/spdk_pid92662
00:30:02.413 Removing: /var/run/dpdk/spdk_pid93092
00:30:02.413 Removing: /var/run/dpdk/spdk_pid93542
00:30:02.413 Removing: /var/run/dpdk/spdk_pid93996
00:30:02.413 Removing: /var/run/dpdk/spdk_pid94458
00:30:02.413 Clean
00:30:02.413 06:17:25 -- common/autotest_common.sh@1451 -- # return 0
00:30:02.413 06:17:25 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:30:02.413 06:17:25 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:02.413 06:17:25 -- common/autotest_common.sh@10 -- # set +x
00:30:02.413 06:17:25 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:30:02.413 06:17:25 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:02.413 06:17:25 -- common/autotest_common.sh@10 -- # set +x
00:30:02.671 06:17:25 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:02.671 06:17:25 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:30:02.671 06:17:25 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:30:02.671 06:17:25 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:30:02.671 06:17:25 -- spdk/autotest.sh@394 -- # hostname
00:30:02.672 06:17:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:30:02.672 geninfo: WARNING: invalid characters removed from testname!
00:30:29.235 06:17:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:31.137 06:17:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:33.668 06:17:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:36.996 06:17:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:39.528 06:18:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:42.062 06:18:04 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:45.345 06:18:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:45.345 06:18:07 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:30:45.345 06:18:07 -- common/autotest_common.sh@1681 -- $ lcov --version
00:30:45.345 06:18:07 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:30:45.345 06:18:07 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:30:45.345 06:18:07 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:30:45.345 06:18:07 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:30:45.345 06:18:07 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:30:45.345 06:18:07 -- scripts/common.sh@336 -- $ IFS=.-:
00:30:45.345 06:18:07 -- scripts/common.sh@336 -- $ read -ra ver1
00:30:45.345 06:18:07 -- scripts/common.sh@337 -- $ IFS=.-:
00:30:45.345 06:18:07 -- scripts/common.sh@337 -- $ read -ra ver2
00:30:45.345 06:18:07 -- scripts/common.sh@338 -- $ local 'op=<'
00:30:45.345 06:18:07 -- scripts/common.sh@340 -- $ ver1_l=2
00:30:45.345 06:18:07 -- scripts/common.sh@341 -- $ ver2_l=1
00:30:45.345 06:18:07 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:30:45.345 06:18:07 -- scripts/common.sh@344 -- $ case "$op" in
00:30:45.345 06:18:07 -- scripts/common.sh@345 -- $ : 1
00:30:45.345 06:18:07 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:30:45.345 06:18:07 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:45.345 06:18:07 -- scripts/common.sh@365 -- $ decimal 1
00:30:45.345 06:18:07 -- scripts/common.sh@353 -- $ local d=1
00:30:45.345 06:18:07 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:30:45.345 06:18:07 -- scripts/common.sh@355 -- $ echo 1
00:30:45.345 06:18:07 -- scripts/common.sh@365 -- $ ver1[v]=1
00:30:45.345 06:18:07 -- scripts/common.sh@366 -- $ decimal 2
00:30:45.345 06:18:07 -- scripts/common.sh@353 -- $ local d=2
00:30:45.345 06:18:07 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:30:45.345 06:18:07 -- scripts/common.sh@355 -- $ echo 2
00:30:45.345 06:18:07 -- scripts/common.sh@366 -- $ ver2[v]=2
00:30:45.345 06:18:07 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:30:45.345 06:18:07 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:30:45.345 06:18:07 -- scripts/common.sh@368 -- $ return 0
00:30:45.345 06:18:07 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:45.345 06:18:07 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:30:45.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:45.345 --rc genhtml_branch_coverage=1
00:30:45.345 --rc genhtml_function_coverage=1
00:30:45.345 --rc genhtml_legend=1
00:30:45.345 --rc geninfo_all_blocks=1
00:30:45.345 --rc geninfo_unexecuted_blocks=1
00:30:45.345
00:30:45.345 '
00:30:45.345 06:18:07 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:30:45.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:45.345 --rc genhtml_branch_coverage=1
00:30:45.345 --rc genhtml_function_coverage=1
00:30:45.345 --rc genhtml_legend=1
00:30:45.345 --rc geninfo_all_blocks=1
00:30:45.345 --rc geninfo_unexecuted_blocks=1
00:30:45.345
00:30:45.345 '
00:30:45.345 06:18:07 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:30:45.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:45.345 --rc genhtml_branch_coverage=1
00:30:45.345 --rc genhtml_function_coverage=1
00:30:45.345 --rc genhtml_legend=1
00:30:45.345 --rc geninfo_all_blocks=1
00:30:45.345 --rc geninfo_unexecuted_blocks=1
00:30:45.345
00:30:45.345 '
00:30:45.345 06:18:07 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:30:45.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:45.345 --rc genhtml_branch_coverage=1
00:30:45.345 --rc genhtml_function_coverage=1
00:30:45.345 --rc genhtml_legend=1
00:30:45.345 --rc geninfo_all_blocks=1
00:30:45.345 --rc geninfo_unexecuted_blocks=1
00:30:45.345
00:30:45.345 '
00:30:45.345 06:18:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:45.345 06:18:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:30:45.345 06:18:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:45.345 06:18:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:45.345 06:18:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:45.345 06:18:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.345 06:18:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.345 06:18:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.345 06:18:07 -- paths/export.sh@5 -- $ export PATH
00:30:45.345 06:18:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.345 06:18:07 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:30:45.345 06:18:07 -- common/autobuild_common.sh@479 -- $ date +%s
00:30:45.345 06:18:07 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733638687.XXXXXX
00:30:45.345 06:18:07 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733638687.aFVeoK
00:30:45.345 06:18:07 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:30:45.345 06:18:07 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:30:45.345 06:18:07 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:30:45.345 06:18:07 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:30:45.345 06:18:07 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:30:45.345 06:18:07 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:30:45.345 06:18:07 -- common/autobuild_common.sh@495 -- $ get_config_params
00:30:45.345 06:18:07 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:30:45.345 06:18:07 -- common/autotest_common.sh@10 -- $ set +x
00:30:45.345 06:18:07 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:30:45.345 06:18:07 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:30:45.345 06:18:07 -- pm/common@17 -- $ local monitor
00:30:45.345 06:18:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.345 06:18:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.345 06:18:07 -- pm/common@25 -- $ sleep 1
00:30:45.345 06:18:07 -- pm/common@21 -- $ date +%s
00:30:45.345 06:18:07 -- pm/common@21 -- $ date +%s
00:30:45.345 06:18:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733638687
00:30:45.345 06:18:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733638687
00:30:45.345 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733638687_collect-cpu-load.pm.log
00:30:45.345 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733638687_collect-vmstat.pm.log
00:30:45.913 06:18:08 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:30:45.913 06:18:08 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:30:45.913 06:18:08 -- spdk/autopackage.sh@14 -- $ timing_finish
00:30:45.913 06:18:08 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:45.913 06:18:08 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:30:45.913 06:18:08 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:45.913 06:18:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:45.913 06:18:08 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:45.913 06:18:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:45.913 06:18:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.913 06:18:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:30:45.913 06:18:08 -- pm/common@44 -- $ pid=96146
00:30:45.913 06:18:08 -- pm/common@50 -- $ kill -TERM 96146
00:30:45.913 06:18:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.913 06:18:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:30:45.913 06:18:08 -- pm/common@44 -- $ pid=96147
00:30:45.913 06:18:08 -- pm/common@50 -- $ kill -TERM 96147 + [[ -n 6032 ]] + sudo kill 6032
00:30:46.180 [Pipeline] }
00:30:46.196 [Pipeline] // timeout
00:30:46.201 [Pipeline] }
00:30:46.213 [Pipeline] // stage
00:30:46.217 [Pipeline] }
00:30:46.226 [Pipeline] // catchError
00:30:46.232 [Pipeline] stage
00:30:46.233 [Pipeline] { (Stop VM)
00:30:46.241 [Pipeline] sh
00:30:46.518 + vagrant halt
00:30:50.732 ==> default: Halting domain...
00:30:56.011 [Pipeline] sh
00:30:56.286 + vagrant destroy -f
00:30:59.572 ==> default: Removing domain...
00:30:59.585 [Pipeline] sh
00:30:59.866 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:30:59.875 [Pipeline] }
00:30:59.890 [Pipeline] // stage
00:30:59.895 [Pipeline] }
00:30:59.909 [Pipeline] // dir
00:30:59.915 [Pipeline] }
00:30:59.930 [Pipeline] // wrap
00:30:59.936 [Pipeline] }
00:30:59.948 [Pipeline] // catchError
00:30:59.958 [Pipeline] stage
00:30:59.960 [Pipeline] { (Epilogue)
00:30:59.973 [Pipeline] sh
00:31:00.256 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:06.827 [Pipeline] catchError
00:31:06.829 [Pipeline] {
00:31:06.841 [Pipeline] sh
00:31:07.121 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:07.379 Artifacts sizes are good
00:31:07.388 [Pipeline] }
00:31:07.400 [Pipeline] // catchError
00:31:07.409 [Pipeline] archiveArtifacts
00:31:07.416 Archiving artifacts
00:31:07.535 [Pipeline] cleanWs
00:31:07.546 [WS-CLEANUP] Deleting project workspace...
00:31:07.546 [WS-CLEANUP] Deferred wipeout is used...
00:31:07.552 [WS-CLEANUP] done
00:31:07.554 [Pipeline] }
00:31:07.570 [Pipeline] // stage
00:31:07.574 [Pipeline] }
00:31:07.588 [Pipeline] // node
00:31:07.594 [Pipeline] End of Pipeline
00:31:07.631 Finished: SUCCESS